Andrea de Varda Profile
Andrea de Varda

@devarda_a

Followers: 414 · Following: 986 · Media: 24 · Statuses: 136

Postdoc at MIT BCS, interested in language(s) and thought in humans and LMs

Joined March 2022
@neuranna
Anna Ivanova
11 days
The last chapter of my PhD (expanded) is finally out as a preprint! “Semantic reasoning takes place largely outside the language network” 🧠🧐 https://t.co/Z7cgHsvIbu What is semantic reasoning? Read on! 🧵👇
biorxiv.org
The brain’s language network is often implicated in the representation and manipulation of abstract semantic knowledge. However, this view is inconsistent with a large body of evidence suggesting...
@ev_fedorenko
Ev (like in 'evidence', not Eve) Fedorenko 🇺🇦
13 days
Finally out in @PNASNews: https://t.co/ahDbYpAOA7 (Three distinct components of pragmatic language use: Social conventions, intonation, and world knowledge–based causal reasoning), with many new analyses (grateful for a thoughtful and constructive review process at PNAS!)
pnas.org
Successful communication requires frequent inferences. Such inferences span a multitude of phenomena: from understanding metaphors, to detecting ir...
@ev_fedorenko
Ev (like in 'evidence', not Eve) Fedorenko 🇺🇦
2 years
Thrilled to share this tour de force co-led by SammyFloyd + @OlessiaJour! 8 years in the making! "A tripartite structure of pragmatic language abilities: comprehension of social conventions, intonation processing, and causal reasoning". W/ @ZachMineroff; co-supervised w/ @LanguageMIT 1/n
@devarda_a
Andrea de Varda
13 days
Really nice demonstration (led by @DKryvosheieva and @GretaTuckute) that agreement phenomena seem to carve out a shared subspace in LLMs: very different agreement types rely on overlapping units, also across languages!
@GretaTuckute
Greta Tuckute
13 days
How do LLMs process syntax? Do different syntactic phenomena recruit the same model units, or do they recruit distinct model components? And do different languages rely on similar units to process the same syntactic phenomenon? Check out our new preprint (to appear at ACL 2026)!
@_coltoncasto
Colton Casto
26 days
What does it mean to understand language? We argue that the brain’s core language system is limited, and that *deeply* understanding language requires EXPORTING information to other brain regions. w/ @neuranna @ev_fedorenko @Nancy_Kanwisher https://t.co/6vvRGpkgE6 1/n🧵👇
arxiv.org
Language understanding entails not just extracting the surface-level meaning of the linguistic input, but constructing rich mental models of the situation it describes. Here we propose that...
@devarda_a
Andrea de Varda
1 month
Computational psycho/neurolinguistics is lots of fun, but most studies only focus on English. If you think cross-linguistic evidence matters for understanding the language system, consider submitting an abstract to MMMM 2026!
@devarda_a
Andrea de Varda
1 month
Check out the revised version: it adds new reasoning tasks and a replication with six models. Link:
pnas.org
Do neural network models capture the cognitive demands of human reasoning? Across seven reasoning tasks, we show that the length of the chain-of-th...
@devarda_a
Andrea de Varda
1 month
"The cost of thinking is similar between large reasoning models and humans" is now out in PNAS! The number of tokens produced by reasoning models predicts human RTs across 7 reasoning tasks, including math, logic, relational, and even intuitive (social and physical) reasoning.
@devarda_a
Andrea de Varda
5 months
New preprint! 🤖🧠 The cost of thinking is similar between large reasoning models and humans 👉 https://t.co/0G6ay4NQc5 w/ Ferdinando D'Elia, @AndrewLampinen, and @ev_fedorenko (1/6)
@thomashikaru
Thomas Hikaru Clark
3 months
What makes some sentences more memorable than others? Our new paper gathers memorability norms for 2500 sentences using a recognition paradigm, building on past work in visual and word memorability. @GretaTuckute @bj_mdn @ev_fedorenko
@kanishkamisra
Kanishka Misra 🌊
3 months
The compling group at UT Austin ( https://t.co/qBWIqHQmFG) is looking for PhD students! Come join me, @kmahowald, and @jessyjli as we tackle interesting research questions at the intersection of ling, cogsci, and ai! Some topics I am particularly interested in:
@neuranna
Anna Ivanova
3 months
As our lab started to build encoding 🧠 models, we were trying to figure out best practices in the field. So @NeuroTaha built a library to easily compare design choices & model features across datasets! We hope it will be useful to the community & plan to keep expanding it! 1/
@NeuroTaha
Taha Binhuraib 🦉
3 months
🚨 Paper alert: To appear in the DBM NeurIPS Workshop: "LITcoder: A General-Purpose Library for Building and Comparing Encoding Models" 📄 arXiv: https://t.co/jXoYcIkpsC 🔗 project: https://t.co/UHtzfGGriY
@whylikethis_
Yevgeni Berzak
3 months
Check out our* new preprint on decoding open-ended information-seeking goals from eye movements! *Proud to say that my main contribution to this work is the banger model names: DalEye Llama and DalEye LLaVa! https://t.co/XNj2adjabc
@isabelpapad
Isabel Papadimitriou @ NeurIPS ☀️🌊
3 months
Are there conceptual directions in VLMs that transcend modality? Check out our COLM spotlight🔦 paper! We analyze how linear concepts interact with multimodality in VLM embeddings using SAEs with @Huangyu58589918, @napoolar, @ShamKakade6 and Stephanie Gil https://t.co/4d9yDIeePd
@LamarraTommaso
Tommaso Lamarra
4 months
Road to Bordeaux ✈️🇫🇷 for #SLE2025, where I'm going to present IconicITA! An @Abstraction_ERC @Unibo and @unimib joint project on iconicity ratings ☑️ for the Italian language across L1 and L2 speakers! @m.bolognesi @BeatriceGi @a.ravelli @devarda_a @chiarasaponaro8
@RTomMcCoy
Tom McCoy
4 months
🤖🧠 NEW PAPER ON COGSCI & AI 🧠🤖 Recent neural networks capture properties long thought to require symbols: compositionality, productivity, rapid learning So what role should symbols play in theories of the mind? For our answer...read on! Paper: https://t.co/VsCLpsiFuU 1/n
@GretaTuckute
Greta Tuckute
4 months
@MoshePoliak
Moshe Poliak
5 months
(1)💡NEW PUBLICATION💡 Word and construction probabilities explain the acceptability of certain long-distance dependency structures Work with Curtis Chen and @LanguageMIT Link to paper: https://t.co/m9eYj5uAwF In memory of Curtis Chen.