
Alexander Huth (@alex_ander)
5K Followers · 19K Following · 193 Media · 3K Statuses
Interested in how & what the brain computes. Associate professor of CS & Neuro @UTAustin. Married to the incredible @Libertysays. he/him
Austin, TX · Joined April 2008
I'm not at NeurIPS this year, but @csinva, @NeuroRJ, & co. are presenting work on how you can just ask LLMs questions about pieces of text and then use the answers as embeddings. This works well and is very interpretable! Poster #3801 from 11-2 today
LLM embeddings are opaque, hurting them in contexts where we really want to understand what’s in them (like neuroscience). Our new work asks whether we can craft *interpretable embeddings* just by asking yes/no questions to black-box LLMs. 🧵
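A minimal sketch of the question-answering embedding idea from the thread above, assuming a hypothetical `ask_llm` wrapper around any black-box LLM API; the questions are illustrative placeholders, not the paper's actual question set:

```python
# Minimal sketch of question-answering embeddings: each dimension of the
# embedding is the LLM's yes/no answer to one interpretable question.
# `ask_llm` is a hypothetical stand-in for any black-box LLM chat API.

import numpy as np

QUESTIONS = [
    "Does the text mention a person?",
    "Is the text about a place or location?",
    "Does the text describe an emotion?",
    "Does the text contain numbers or quantities?",
]

def ask_llm(prompt: str) -> str:
    """Stand-in for a black-box LLM call (e.g., an API request)."""
    raise NotImplementedError

def qa_embed(text: str, questions=QUESTIONS) -> np.ndarray:
    """Embed `text` as a binary vector of yes/no answers.

    Dimension i is 1.0 if the LLM answers 'yes' to questions[i],
    so every dimension has a plain-language interpretation.
    """
    answers = []
    for q in questions:
        reply = ask_llm(f"Answer yes or no only.\nText: {text}\nQuestion: {q}")
        answers.append(1.0 if reply.strip().lower().startswith("yes") else 0.0)
    return np.array(answers)
```

Any downstream linear model (e.g., a ridge regression onto brain responses) can then read each coefficient as the weight on a plain-English question, which is what makes the embedding interpretable.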
RT @Tknapen: In this TINS review we (w/@cvnlab, @elimerriam, & @eline_kupers) argue that *intensive* (many hours of data) fMRI of single in…
RT @mtoneva1: Brain activity is useful for improving language models! We show that even small amounts of brain activity recorded while peo…
RT @MathisPink: 1/n 🤖🧠 New paper alert! 📢 In "Assessing Episodic Memory in LLMs with Sequence Order Recall Tasks" (w…
RT @jenellefeather: This nicely summarizes some misconceptions in the recent position paper about measures of brain-model comparison. One…
RT @aran_nayebi: 1/6 I usually don’t comment on these things, but @RylanSchaeffer et al.'s paper contains enough misconceptions that I thou…
RT @KordingLab: I designed a webapp to tinker with the hyperparameters of UMAP as I will participate in a podcast on it today. It is super…
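For anyone who wants to tinker outside the webapp, a minimal sweep over the two hyperparameters that most change UMAP's output, using the umap-learn package and a stock sklearn dataset:

```python
# n_neighbors trades local vs. global structure; min_dist controls how
# tightly points pack in the embedding. A quick sweep on toy data:

import umap
from sklearn.datasets import load_digits

X, _ = load_digits(return_X_y=True)

for n_neighbors in (5, 15, 50):
    for min_dist in (0.0, 0.1, 0.5):
        emb = umap.UMAP(n_neighbors=n_neighbors,
                        min_dist=min_dist,
                        random_state=0).fit_transform(X)
        print(n_neighbors, min_dist, emb.shape)  # 2-D embedding each time
```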
But I also want to point out that a good predictive model is a good predictive model. If you focus on predictions rather than interpreting mechanisms, you still get a lot of utility! E.g., this recent work from our group:
Science faces an explainability crisis: ML models can predict many natural phenomena but can't explain them. We tackle this issue in language neuroscience by using LLMs to generate *and validate* explanations with targeted follow-up experiments 1/2
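A rough schematic of the generate-and-validate loop the quoted tweet describes; every helper here (`llm`, `predict_response`) is a hypothetical stand-in, not the paper's actual pipeline:

```python
# Schematic: an LLM proposes a natural-language explanation for what
# drives a brain response, then the explanation is tested with a
# targeted follow-up experiment on newly generated stimuli.

def explain_and_validate(top_texts, bottom_texts, predict_response, llm):
    # 1. Ask the LLM what the response-driving texts have in common.
    explanation = llm(
        "These texts drive a brain response:\n" + "\n".join(top_texts) +
        "\nThese do not:\n" + "\n".join(bottom_texts) +
        "\nIn one phrase, what do the first texts share?"
    )

    # 2. Targeted follow-up: synthesize texts that match or violate it.
    matching = llm(f"Write 5 short sentences about: {explanation}").splitlines()
    control = llm("Write 5 short sentences on unrelated topics.").splitlines()

    # 3. The explanation is supported if predicted responses separate.
    score = (sum(map(predict_response, matching)) / len(matching)
             - sum(map(predict_response, control)) / len(control))
    return explanation, score
```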
This echoes @o_guest & @andrea_e_martin's recent paper (…) and does a good job of explaining the findings that @NeuroRJ and I wrote about in …
Nice quick read with an important point: even if a model predicts brain data well, it doesn't mean the model uses the same mechanism the brain does. More expressive models generally do better than less expressive models regardless of mechanism.
My 2nd to last #neuroscience paper will appear @unireps!! 🧠🧠 Maximizing Neural Regression Scores May Not Identify Good Models of the Brain 🧠🧠 w/ @KhonaMikail @neurostrow @BrandoHablando @sanmikoyejo. Answering a puzzle 2 years in the making. 1/12
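A toy illustration of that point: on synthetic data, random-feature models with no mechanistic resemblance to the data-generating process earn higher regression scores simply by being more expressive. A hypothetical demo, not the paper's analysis:

```python
# A more expressive model (more random nonlinear features) gets a higher
# neural regression score on the same synthetic "brain" data, despite
# its mechanism being arbitrary.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
stim = rng.normal(size=(2000, 20))                        # stimuli
brain = np.tanh(stim @ rng.normal(size=(20, 1))).ravel()  # "brain" response
brain += 0.1 * rng.normal(size=brain.shape)               # measurement noise

def regression_score(n_features):
    """Held-out R^2 of a random-feature model with n_features units."""
    W = rng.normal(size=(20, n_features))
    feats = np.tanh(stim @ W)                             # arbitrary mechanism
    Xtr, Xte, ytr, yte = train_test_split(feats, brain, random_state=0)
    return Ridge(alpha=1.0).fit(Xtr, ytr).score(Xte, yte)

for n in (2, 10, 100, 1000):
    print(n, round(regression_score(n), 3))  # score generally rises with n
```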
RT @BrainLifeio: 🧠 Stop by poster #z11 and learn more about work by PhD student Suna Guo in collaboration with @furranko and @alex_ander. @…
RT @gab709_1: 🚨 Thrilled to announce that our paper "Language models and brains align due to more than next-word prediction and word-level…