Richard Antonello Profile
Richard Antonello

@NeuroRJ

Followers: 359
Following: 441
Media: 10
Statuses: 224

Postdoc in the Mesgarani Lab at Columbia University. Studying how the brain processes language using LLMs. (Formerly @HuthLab at UT Austin)

Joined May 2020
@NeuroRJ
Richard Antonello
12 days
RT @katie_kang_: LLMs excel at fitting finetuning data, but are they learning to reason or just parroting🦜? We found a way to probe a mode….
0
122
0
@NeuroRJ
Richard Antonello
25 days
RT @GGaziv: Can we precisely and noninvasively modulate deep brain activity just by riding the natural visual feed? 👁️🧠 In our new preprint….
0
11
0
@NeuroRJ
Richard Antonello
30 days
RT @sparse_emcheng: We'll be presenting this at #ACL2025! Come find me and @tomjiralerspong in Vienna :)
0
2
0
@NeuroRJ
Richard Antonello
1 month
RT @yufan_zhuang: 🤯 Your LLM just threw away 99.9% of what it knows. Standard decoding samples one token at a time and discards the rest o….
0
7
0
@NeuroRJ
Richard Antonello
2 months
For those attending NAACL, today I'll be presenting recent work on how we can use language encoding models to identify functional specialization throughout cortex. Stop by my talk at 10:30 at the CMCL workshop!
1
1
15
@NeuroRJ
Richard Antonello
3 months
RT @karansdalal: Today, we're releasing a new paper – One-Minute Video Generation with Test-Time Training. We add TTT layers to a pre-trai….
0
938
0
@NeuroRJ
Richard Antonello
4 months
RT @Ruimin_G: Excited to introduce funROI: A Python package for functional ROI analyses of fMRI data! #fMRI #Neur….
0
20
0
@NeuroRJ
Richard Antonello
4 months
RT @mariannearr: 🚨 Announcing our #ICLR2025 Oral! 🔥 Diffusion LMs are on the rise for parallel text generation! But unlike autoregressive LM….
0
133
0
@NeuroRJ
Richard Antonello
4 months
RT @DanielCohenOr1: Vectorization into a neat SVG! 🎨✨ Instead of generating a messy SVG (left), we produce a structured, compact representa….
0
126
0
@NeuroRJ
Richard Antonello
4 months
RT @didiforx: Reasoning models lack atomic thought ⚛️. Unlike humans using independent units, they store full histories 🤔 Introducing Atom….
0
417
0
@NeuroRJ
Richard Antonello
5 months
RT @GeelingC: 🎉 Excited to share: My first ML conference paper, Population Transformer 🧠, is an Oral at #ICLR2025! This work has truly evolv….
0
17
0
@NeuroRJ
Richard Antonello
6 months
RT @GretaTuckute: Our @CogCompNeuro GAC paper is out! We focus on two main questions: 1⃣ How should we use neuroscientific data in model….
0
9
0
@NeuroRJ
Richard Antonello
6 months
RT @Alxmrphi: EEG Decoding with Multi-Timescale Language Models. Our paper was recently published in Computational Linguistics. Tweetprint….
0
1
0
@NeuroRJ
Richard Antonello
6 months
RT @AIatMeta: New research from Meta FAIR — Meta Memory Layers at Scale. This work takes memory layers beyond proof-of-concept, proving the….
0
178
0
@NeuroRJ
Richard Antonello
7 months
RT @TomerUllman: "The Illusion Illusion": vision language models recognize images of illusions, but they also say non-illusions are illu….
0
16
0
@NeuroRJ
Richard Antonello
7 months
RT @GeelingC: Catch me and @czlwang presenting our poster today (12/14) 3:30-5:30pm at the #NeurIPS2024 NeuroAI Workshop! 🧠
0
1
0
@NeuroRJ
Richard Antonello
7 months
RT @poolio: How to upset the (few remaining) neuroscientists at NeurIPS 101
0
140
0
@NeuroRJ
Richard Antonello
7 months
RT @patrickmineault: New post! What do brain scores teach us about brains? Does accounting for variance in the brain mean that an ANN is br….
0
67
0
@NeuroRJ
Richard Antonello
7 months
RT @unireps: The UniReps Workshop is happening THIS SATURDAY at #NeurIPS! 🤖🧠 Join us for a day of insightful talks and engaging discussion….
0
14
0
@NeuroRJ
Richard Antonello
7 months
Come by our poster (#3801), exploring how we can use the question-answering abilities of LLMs to build more #interpretable models of language processing in the 🧠, starting in one hour at #NeurIPS!
@csinva
Chandan Singh
1 year
LLM embeddings are opaque, hurting them in contexts where we really want to understand what’s in them (like neuroscience). Our new work asks whether we can craft *interpretable embeddings* just by asking yes/no questions to black-box LLMs. 🧵
0
1
9