New paper with @NeuroRJ, @csinva, @SunaGuo, Gavin Mischler, Jianfeng Gao, & @NimaMesgarani: We use LLMs to generate VERY interpretable embeddings where each dimension corresponds to a scientific theory, & then use these embeddings to predict fMRI and ECoG. It WORKS!
Evaluating scientific theories as predictive models in language neuroscience https://t.co/Ao2f3WNXNN
#biorxiv_neursci
And it works REALLY well! Prediction performance for encoding models is on a par with uninterpretable Llama3 embeddings! Even with just 35 dimensions!!! I find this fairly wild.
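For anyone who wants the flavor of the setup, here's a minimal sketch of a linear encoding model like this, with random stand-in data in place of the real embeddings and fMRI responses. The regularization and cross-validation choices are illustrative assumptions, not the paper's exact pipeline (and real fMRI pipelines also handle hemodynamic delays, which I'm skipping):

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 35))            # stand-in for 35-dim question embeddings
Y = rng.standard_normal((1000, 500))  # stand-in for fMRI responses (time x voxels)

# Keep temporal order intact when splitting (no shuffling for time series)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, shuffle=False)

# One regularized linear model predicting every voxel from the 35 answers
model = RidgeCV(alphas=np.logspace(-2, 4, 10)).fit(X_tr, Y_tr)
Y_hat = model.predict(X_te)

# Standard encoding-model score: per-voxel correlation on held-out data
r = np.array([np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(Y.shape[1])])
print(f"mean held-out correlation: {r.mean():.3f}")
```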
But the wilder thing is how we get the embeddings: by just asking LLMs questions. Each theory is cast as a yes/no question. We then have GPT-4 answer each question about each 10-gram in our natural language dataset. We did this for ~600 theories/questions.
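A hedged sketch of that embedding step (not the authors' exact pipeline; the question list, prompt wording, and model name here are illustrative assumptions):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTIONS = [
    "Does the input describe a visual experience?",
    "Does the input include dialogue?",
    "Does the input contain a negation?",
    # ... one yes/no question per candidate theory (~600 in the paper)
]

def embed(ngram: str) -> list[float]:
    """One binary dimension per question: 1.0 for 'yes', 0.0 for 'no'."""
    vec = []
    for q in QUESTIONS:
        resp = client.chat.completions.create(
            model="gpt-4",
            temperature=0,
            messages=[{
                "role": "user",
                "content": f'Input: "{ngram}"\n{q} Answer only "yes" or "no".',
            }],
        )
        answer = resp.choices[0].message.content.strip().lower()
        vec.append(1.0 if answer.startswith("yes") else 0.0)
    return vec

print(embed("the sunset glowed red over the distant mountains"))
```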
Because each dimension in the embedding corresponds to a specific question, the encoding model weights are interpretable right out of the box. "Does the input describe a visual experience?" has high weight all along the boundary of visual cortex, for example.
"Does the input include dialogue?" (27) has high weights in a smattering of small regions in temporal cortex. And "Does the input contain a negation?" (35) has high weights in anterior temporal lobe and a few prefrontal areas. I think there's a lot of drilling-down we can do here
This method lets us quantitatively assess how much variance different theories explain in brain responses to natural language. To see how well this aligns with what scientists think, we polled experts on which questions/theories they expected to be important.
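One way to operationalize that comparison (my assumption for illustration; the paper's importance metric may differ): permutation importance per question dimension, rank-correlated with the expert ratings. This reuses `model`, `X_te`, and `Y_te` from the ridge sketch above, with stand-in expert ratings:

```python
import numpy as np
from scipy.stats import spearmanr

def mean_corr(model, X, Y):
    """Mean per-voxel correlation between predicted and observed responses."""
    Y_hat = model.predict(X)
    return np.mean([np.corrcoef(Y_hat[:, v], Y[:, v])[0, 1] for v in range(Y.shape[1])])

rng = np.random.default_rng(0)
base = mean_corr(model, X_te, Y_te)

# Importance of question k = drop in held-out score when its column is shuffled
importance = []
for k in range(X_te.shape[1]):
    Xp = X_te.copy()
    Xp[:, k] = rng.permutation(Xp[:, k])
    importance.append(base - mean_corr(model, Xp, Y_te))

expert_ratings = rng.random(X_te.shape[1])  # stand-in for the expert poll
print(spearmanr(importance, expert_ratings))
```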
The model and experts were well-aligned, but there were some surprises, like "Does the input include technical or specialized terminology?" (32), which was much more important than expected.
To validate the maps we get from this model, we compared them to expectations derived from NeuroSynth and to results from experiments targeting specific semantic categories, and we also checked inter-subject reliability. All quite successful.
Finally, we tested whether the same interpretable embeddings could also be used to model ECoG data from Nima Mesgarani's lab. Even though our features are less well-localized in time than LLM embeddings, this still works quite well!
Cortical weight maps were also reasonably correlated between ECoG and fMRI data, at least for the dimensions well-captured in the ECoG coverage.
I'm posting this thread to highlight some things I thought were cool, but if you're interested you should also check out what @NeuroRJ wrote:
In our new paper, we explore how we can build encoding models that are both powerful and understandable. Our model uses an LLM to answer 35 questions about a sentence's content. The answers linearly contribute to our prediction of how the brain will respond to that sentence. 1/6
or @csinva:
New paper: Ask 35 simple questions about sentences in a story and use the answers to predict brain responses. Interpretable. Compact. Surprisingly high performance in both fMRI and ECoG. https://t.co/UbknakyP0w