Refael Tikochinski
@R_Tikochinski

NLP and Neuroscience PhD student

Followers: 37 · Following: 5 · Media: 10 · Statuses: 17
Joined September 2011
@R_Tikochinski
Refael Tikochinski
7 months
Together, our findings reveal how the brain’s hierarchical temporal processing mechanisms enable the flexible integration of information over time, providing valuable insights for both cognitive neuroscience and AI development.
@R_Tikochinski
Refael Tikochinski
7 months
Finally, we used complementary spectral analysis to map cortical areas on a scale ranging from short- to long-term contextual processing. We found that the more a brain area is dominated by low frequencies, the better our novel model predicted its signal, and vice versa.
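
As an illustration only: a minimal Python sketch of this kind of spectral mapping, assuming a per-voxel BOLD time series. The TR and the 0.05 Hz cutoff are placeholder values, and scipy's Welch estimator stands in for whatever spectral method the paper actually used.

```python
import numpy as np
from scipy.signal import welch

def low_frequency_dominance(voxel_ts, tr=1.5, cutoff_hz=0.05):
    """Fraction of BOLD signal power below cutoff_hz (placeholder parameters)."""
    freqs, psd = welch(voxel_ts, fs=1.0 / tr, nperseg=min(256, len(voxel_ts)))
    return psd[freqs < cutoff_hz].sum() / psd.sum()

# bold: (n_voxels, n_TRs) array -> one dominance value per voxel, which can
# then be related to each voxel's prediction gain under the new model.
# dominance = np.apply_along_axis(low_frequency_dominance, 1, bold)
```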
@R_Tikochinski
Refael Tikochinski
7 months
This model significantly enhances the prediction of neural activity in higher-order regions involved in long-timescale processing.
@R_Tikochinski
Refael Tikochinski
7 months
Next, we introduce an alternative LLM-based incremental-context model that combines incoming short-term context with an aggregated, dynamically updated summary of prior context.
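
The core loop of such a model might look like the sketch below. `embed` and `summarize` are hypothetical stand-ins for the underlying LLM calls, and the 32-word window is an arbitrary choice, not the paper's.

```python
def incremental_context_embeddings(words, embed, summarize, window=32):
    """Embed each incoming short window together with a running,
    dynamically updated summary of everything heard so far.

    embed(text) -> vector; summarize(summary, chunk) -> str.
    Both callables are placeholders, not the paper's actual components.
    """
    summary, vectors = "", []
    for start in range(0, len(words), window):
        chunk = " ".join(words[start:start + window])
        vectors.append(embed((summary + " " + chunk).strip()))  # short-term + aggregated context
        summary = summarize(summary, chunk)                     # update the running summary
    return vectors
```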
@R_Tikochinski
Refael Tikochinski
7 months
Using fMRI data from 219 participants listening to spoken narratives, we first demonstrate that LLMs predict brain activity effectively only when using short contextual windows of up to a few dozen words.
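
An encoding analysis behind a result like this typically looks, in simplified form, like the sketch below: ridge regression from window-limited embeddings to voxel responses, repeated across window sizes. HRF alignment is omitted and the regularization strength is a placeholder.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

def encoding_scores(embeddings, bold, alpha=100.0):
    """Cross-validated voxel-wise prediction accuracy (Pearson r).

    embeddings : (n_TRs, dim) LLM embeddings built with one context window.
    bold       : (n_TRs, n_voxels) fMRI responses (HRF handling omitted).
    """
    pred = cross_val_predict(Ridge(alpha=alpha), embeddings, bold, cv=5)
    return np.array([np.corrcoef(pred[:, v], bold[:, v])[0, 1]
                     for v in range(bold.shape[1])])

# Rebuild the embeddings with windows of, e.g., 8, 32, 128, 512 words and
# compare the resulting score maps across the cortex.
```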
@R_Tikochinski
Refael Tikochinski
7 months
In this study, we show how the brain—unlike LLMs that process large text windows in parallel—integrates short-term and long-term contextual information through an incremental mechanism.
@R_Tikochinski
Refael Tikochinski
7 months
Very excited to share our new paper published in Nature Communications @NatureComms (link below). This work is part of my PhD research under the supervision of @roireichart (Technion), @HassonUri (@HassonLab), and @ArielYGoldstein, in collaboration with @YoavMeiri.
@R_Tikochinski
Refael Tikochinski
2 years
Our paper has been accepted for publication in Cerebral Cortex. Here is the link: . The latest version is also freely available via the bioRxiv link below. @roireichart @ArielYGoldstein @HassonLab @YeshurunYaara.
@R_Tikochinski
Refael Tikochinski
4 years
Very excited to share our new preprint, part of my PhD research under the supervision of @roireichart (Technion), @HassonUri and @ArielYGoldstein (@HassonLab), and in collaboration with @YeshurunYaara.
@R_Tikochinski
Refael Tikochinski
4 years
Overall, our results show that fine-tuning deep language models provides a reliable and cognitively plausible model of the way people adapt to changes in perspective and context in the natural world.
@R_Tikochinski
Refael Tikochinski
4 years
We also found a strong correlation between the magnitude of the models' representation distance and the expected differences in interpretation among listeners from different groups, as judged externally by independent raters.
@R_Tikochinski
Refael Tikochinski
4 years
Second, we show that the degree of difference between the listeners' neural signals (measured voxel-wise via Euclidean distance) can be approximated using the distances between the representations of the story extracted from the two fine-tuned models.
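
In sketch form, and with purely illustrative shapes and segmentation, this analysis amounts to correlating two distance profiles:

```python
import numpy as np
from scipy.stats import pearsonr

# Assumed inputs, per story segment: the two groups' mean neural responses
# (n_segments, n_voxels) and the two models' story representations
# (n_segments, dim). The segmentation itself is an assumption of this sketch.
def distance_correlation(bold_a, bold_b, reps_a, reps_b):
    neural_dist = np.linalg.norm(bold_a - bold_b, axis=1)  # Euclidean, per segment
    model_dist = np.linalg.norm(reps_a - reps_b, axis=1)   # Euclidean, per segment
    return pearsonr(model_dist, neural_dist)
```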
@R_Tikochinski
Refael Tikochinski
4 years
First, we showed that the models' representations can be used to predict the context in which a subject interpreted the story, simply by inspecting which model better predicts their neural signal in voxels within language-related brain areas.
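
One simple realization of this decoding step, assuming precomputed encoding-model predictions for each fine-tuned model and a boolean mask of language-related voxels:

```python
import numpy as np

def infer_context(subject_bold, pred_cheating, pred_paranoia, mask):
    """Label a subject by whichever model's predictions better match
    their signal in the masked voxels. All arrays are (n_TRs, n_voxels).
    """
    def mean_r(pred):
        return np.mean([np.corrcoef(pred[:, v], subject_bold[:, v])[0, 1]
                        for v in np.flatnonzero(mask)])
    return "cheating" if mean_r(pred_cheating) > mean_r(pred_paranoia) else "paranoia"
```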
@R_Tikochinski
Refael Tikochinski
4 years
From each fine-tuned model, we extracted word-embedding representations of the story and demonstrated the association between those representations and the neural responses of the corresponding group of listeners, using two different analyses:
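
Extraction could look like the sketch below; the checkpoint path and the mean pooling over tokens are assumptions, one plausible reading of "word embedding representations".

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("./bert-cheating")  # placeholder checkpoint path
model.eval()

def story_representations(segments):
    """Mean-pooled last-layer hidden states, one vector per story segment."""
    vecs = []
    with torch.no_grad():
        for text in segments:
            out = model(**tok(text, return_tensors="pt", truncation=True))
            vecs.append(out.last_hidden_state.mean(dim=1).squeeze(0))
    return torch.stack(vecs)
```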
@R_Tikochinski
Refael Tikochinski
4 years
In their experiment, two groups of listeners heard the same story by J.D. Salinger, but with two different perspectives in mind (cheating/paranoia). Accordingly, we fine-tuned a BERT model to fit either the cheating or the paranoia context, using a dedicated dataset of relevant stories.
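
A minimal masked-language-modeling fine-tuning recipe in the Hugging Face style, assuming one plain-text corpus file per condition; file names and hyperparameters are placeholders, not the paper's settings.

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# "cheating.txt" stands in for the dedicated corpus of condition-relevant stories.
ds = load_dataset("text", data_files={"train": "cheating.txt"})["train"]
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=128), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./bert-cheating", num_train_epochs=3),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
)
trainer.train()  # then repeat with the paranoia corpus for the second model
```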
@R_Tikochinski
Refael Tikochinski
4 years
We provide a new perspective on the well-established practice of fine-tuning deep language models, harnessing it to model variations in listeners' perspectives during language comprehension. We tested our methodology on an fMRI dataset collected by Yeshurun et al. (2017).