Andrea de Varda Profile
Andrea de Varda

@devarda_a

Followers: 364 · Following: 845 · Media: 24 · Statuses: 113

Postdoc at MIT BCS, interested in language(s) in humans and LMs

Joined March 2022
@devarda_a
Andrea de Varda
3 days
New preprint! 🤖🧠 The cost of thinking is similar between large reasoning models and humans. 👉 w/ Ferdinando D'Elia, @AndrewLampinen, and @ev_fedorenko (1/7)
osf.io
Do neural network models capture the cognitive demands of human reasoning? Across four reasoning domains, we show that the length of the chain-of-thought generated by a large reasoning model predicts...
4 replies · 22 reposts · 78 likes
@devarda_a
Andrea de Varda
3 days
Why does this alignment emerge? There are similarities in how reasoning models and humans learn: first by observing worked examples (pretraining), then by practicing with feedback (RL). In the end, just like humans, they allocate more effort to harder problems (7/7).
1 reply · 1 repost · 5 likes
@devarda_a
Andrea de Varda
3 days
Token count also captures differences across tasks. Avg. token count predicts avg. RT across domains (r = 0.98, left), and even item-level RTs across all tasks (r = 0.95 (!!), right). A single effort metric scales with human processing cost across domains. (6/7)
1 reply · 1 repost · 5 likes
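For concreteness, here is a minimal sketch of the two correlations described above, with invented numbers standing in for the per-domain and per-item data. Simple pooling is assumed for the item-level analysis, which may differ from the preprint's exact procedure.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data (not the paper's): mean reasoning-token count and mean
# human reaction time (ms) for each of five domains.
domain_tokens = np.array([90.0, 180.0, 260.0, 420.0, 650.0])
domain_rts = np.array([1000.0, 1800.0, 2500.0, 4100.0, 6300.0])
r_domain, _ = pearsonr(domain_tokens, domain_rts)
print(f"domain-level r = {r_domain:.2f}")  # thread reports r = 0.98

# Item-level variant: pool items from every task into a single correlation.
tokens_by_task = [np.array([120, 340, 95]), np.array([80, 450, 300])]
rts_by_task = [np.array([1400, 3100, 1200]), np.array([900, 4000, 2600])]
r_items, _ = pearsonr(np.concatenate(tokens_by_task),
                      np.concatenate(rts_by_task))
print(f"pooled item-level r = {r_items:.2f}")  # thread reports r = 0.95
```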
@devarda_a
Andrea de Varda
3 days
And: this alignment is stronger for R1 than for its base model (DeepSeek-V3), which isn't trained to reason step-by-step. Training the model to reason improves its match to human processing cost. (5/7)
1 reply · 1 repost · 3 likes
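One standard way to ask whether R1's correlation with RTs is reliably higher than V3's is a Fisher r-to-z comparison. A sketch with invented r values and sample sizes; the independent-samples formula here is a simplification, since both models are scored on the same items (a dependent-correlations test would be stricter).

```python
import numpy as np
from scipy.stats import norm

def fisher_z_diff(r1, r2, n1, n2):
    """Two-sided test for a difference between two independent Pearson r's."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)        # Fisher r-to-z transform
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # SE of the z difference
    z = (z1 - z2) / se
    return z, 2 * norm.sf(abs(z))

# Invented correlations for the reasoning model (R1) vs. its base (V3).
z, p = fisher_z_diff(r1=0.62, r2=0.35, n1=200, n2=200)
print(f"z = {z:.2f}, p = {p:.3g}")
```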
@devarda_a
Andrea de Varda
3 days
We found that the number of reasoning tokens generated by the model reliably correlates with human RTs within each task (mean r = 0.62, all ps < .001). (4/7)
1 reply · 1 repost · 6 likes
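A minimal sketch of the within-task analysis, assuming item-level token counts and mean human RTs are already in hand (all numbers here are hypothetical placeholders, not the paper's data):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-task data: item-level reasoning-token counts from the
# model, and mean human reaction times (ms) for the same items.
tasks = {
    "arithmetic_numeric": (np.array([120, 340, 95, 610, 210]),
                           np.array([1400, 3100, 1200, 5200, 2300])),
    "logic": (np.array([80, 450, 300, 150, 700]),
              np.array([900, 4000, 2600, 1500, 6100])),
}

rs = []
for name, (tokens, rts) in tasks.items():
    r, p = pearsonr(tokens, rts)  # correlation within one task
    rs.append(r)
    print(f"{name}: r = {r:.2f}, p = {p:.3g}")

print(f"mean r across tasks = {np.mean(rs):.2f}")  # thread reports 0.62
```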
@devarda_a
Andrea de Varda
3 days
One key aspect is processing cost, reflected in reaction times: how long does it take to solve a problem? We compared human RTs to the length of DeepSeek-R1's reasoning traces across five domains: arithmetic (numeric & verbal), logic, relational reasoning, and the ARC task (3/7)
1 reply · 0 reposts · 3 likes
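Measuring the length of a reasoning trace is straightforward because DeepSeek-R1 emits its chain of thought between <think> tags. A minimal sketch; the whitespace split is a dependency-free approximation, and the tokenizer named in the comment is one plausible choice rather than necessarily the one used in the preprint.

```python
import re

# A toy R1-style completion; DeepSeek-R1 wraps its chain of thought
# in <think> ... </think> tags before the final answer.
response = (
    "<think>Compute 17 * 24. 17 * 20 = 340 and 17 * 4 = 68, "
    "so 340 + 68 = 408.</think>The answer is 408."
)

# Pull out the reasoning trace.
match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
trace = match.group(1) if match else ""

# Crude approximation of effort: whitespace token count. For the real
# analysis you would count tokens with the model's own tokenizer,
# e.g. AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1").
print(f"reasoning tokens (approx.): {len(trace.split())}")
```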
@devarda_a
Andrea de Varda
3 days
A central goal in CogSci is to develop models that explain human reasoning across diverse tasks, from arithmetic to logic to relational inference. Large reasoning models can now solve many of these problems. But does their reasoning process reflect how humans think? (2/7)
1 reply · 0 reposts · 3 likes
@devarda_a
Andrea de Varda
2 months
RT @byungdoh: Have reading time corpora been leaked into LM pre-training corpora? Should you be cautious about using pre-trained LM surpris…
0 replies · 6 reposts · 0 likes
@devarda_a
Andrea de Varda
2 months
RT @whylikethis_: 👀📖 Big news! 📖👀 Happy to announce the release of OneStop Eye Movements! 🍾🍾 The OneStop dataset is the product of over 6 years…
github.com
OneStop: A 360-Participant Eye Tracking Dataset with Different Reading Regimes - lacclab/OneStop-Eye-Movements
0 replies · 11 reposts · 0 likes
@devarda_a
Andrea de Varda
2 months
RT @GretaTuckute: What are the organizing dimensions of language processing? We show that voxel responses are organized along 2 main axes:…
0 replies · 42 reposts · 0 likes
@devarda_a
Andrea de Varda
4 months
RT @JumeletJ: ✨ New paper ✨ Introducing 🌍MultiBLiMP 1.0: A Massively Multilingual Benchmark of Minimal Pairs for Subject-Verb Agreement, c…
0 replies · 19 reposts · 0 likes
@devarda_a
Andrea de Varda
5 months
RT @bkhmsi: 🚨 New Preprint!! LLMs trained on next-word prediction (NWP) show high alignment with brain recordings. But what drives this al…
0 replies · 64 reposts · 0 likes
@devarda_a
Andrea de Varda
5 months
@PetilliMarco1 We show that spatial organization plays a role in conceptual representations (an aspect often overlooked in computational models of meaning). Understanding where objects appear together matters for how we think about them. (8/8).
0 replies · 0 reposts · 0 likes
@devarda_a
Andrea de Varda
5 months
@PetilliMarco1 SemanticScape shows partial isomorphism with text- and CNN-based representations:
✅ Text - Expected: language reflects real-world structure.
✅ CNN - Unexpected: SemanticScape is only based on positions.
Spatial structure reflects perceptual and linguistic relationships. (7/8)
1 reply · 0 reposts · 0 likes
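"Partial isomorphism" between representation spaces is commonly quantified with representational similarity analysis: correlate the pairwise-distance structure of the two spaces over the same items. A generic sketch with random matrices standing in for the SemanticScape, text, and CNN representations (not the paper's pipeline):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_items = 50

# Random stand-ins for item-by-dimension matrices from two spaces
# (e.g., SemanticScape vectors vs. text- or CNN-based vectors).
space_a = rng.normal(size=(n_items, 32))
space_b = rng.normal(size=(n_items, 300))

# Condensed pairwise cosine-distance vectors (upper triangle only).
rdm_a = pdist(space_a, metric="cosine")
rdm_b = pdist(space_b, metric="cosine")

# Second-order similarity: do the two spaces order item pairs alike?
rho, p = spearmanr(rdm_a, rdm_b)
print(f"RSA rho = {rho:.2f}, p = {p:.3g}")
```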
@devarda_a
Andrea de Varda
5 months
@PetilliMarco1 SemanticScape representations predict:
✔️ Semantic similarity judgments (thematic & taxonomic)
✔️ Visual similarity judgments
✔️ Semantic priming latencies
✔️ Analogical relations
❌ Responses to implicit perceptual tasks (6/8)
1 reply · 0 reposts · 0 likes
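A sketch of how concept representations can be scored against human similarity judgments: cosine similarity between concept vectors, rank-correlated with ratings over the same pairs. All vectors and ratings below are invented for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical concept vectors (e.g., rows of a SemanticScape embedding).
vecs = {
    "cup": np.array([0.9, 0.1, 0.3]),
    "saucer": np.array([0.8, 0.2, 0.4]),
    "book": np.array([0.1, 0.9, 0.2]),
    "shelf": np.array([0.2, 0.8, 0.3]),
}

# Hypothetical human similarity ratings (1-7 scale) for the same pairs.
pairs = [("cup", "saucer"), ("book", "shelf"),
         ("cup", "book"), ("saucer", "shelf")]
ratings = [6.5, 5.8, 1.9, 2.2]

model_sims = [cosine(vecs[a], vecs[b]) for a, b in pairs]
rho, p = spearmanr(model_sims, ratings)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```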
@devarda_a
Andrea de Varda
5 months
@PetilliMarco1 📊 How it works:
1️⃣ Extract object positions from images
2️⃣ Compute pairwise distances between objects
3️⃣ Use dimensionality reduction (SVD) to abstract relational structure (5/8)
1 reply · 0 reposts · 0 likes
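A toy end-to-end sketch of the three steps, assuming object annotations with (x, y) centroids per image. Aggregating into an object-by-object mean-distance matrix before SVD is a plausible reading of the pipeline, not necessarily the paper's exact recipe.

```python
import numpy as np

# 1) Hypothetical object positions: per image, object label -> (x, y) centroid.
images = [
    {"cup": (0.42, 0.55), "saucer": (0.44, 0.60), "table": (0.50, 0.80)},
    {"cup": (0.10, 0.30), "table": (0.20, 0.70), "book": (0.75, 0.40)},
]

objects = sorted({o for img in images for o in img})
idx = {o: i for i, o in enumerate(objects)}

# 2) Pairwise Euclidean distances, averaged over co-occurrences.
sums = np.zeros((len(objects), len(objects)))
counts = np.zeros_like(sums)
for img in images:
    for a in img:
        for b in img:
            if a == b:
                continue
            d = np.linalg.norm(np.subtract(img[a], img[b]))
            sums[idx[a], idx[b]] += d
            counts[idx[a], idx[b]] += 1
mean_dist = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)

# 3) SVD to abstract relational structure into dense object vectors.
U, S, Vt = np.linalg.svd(mean_dist)
k = 2                          # number of latent dimensions (toy value)
embeddings = U[:, :k] * S[:k]  # one k-dim vector per object
print(dict(zip(objects, embeddings.round(2))))
```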
@devarda_a
Andrea de Varda
5 months
@PetilliMarco1 We propose SemanticScape, a model of concepts grounded in the spatial relationships between objects in real-world images. It encodes how objects are positioned relative to each other, capturing statistical regularities in visual scenes. (4/8).
1 reply · 0 reposts · 1 like
@devarda_a
Andrea de Varda
5 months
@PetilliMarco1 Objects in visual scenes are not randomly placed: they obey physical and functional constraints. A cup is near a saucer, a book is on a shelf—objects are positioned in structured environments. (3/8).
1 reply · 0 reposts · 0 likes
@devarda_a
Andrea de Varda
5 months
@PetilliMarco1 Traditional distributional semantic models capture meaning from word co-occurrences but lack grounding in the visual world. Computer vision models like CNNs capture visual features but struggle with object relationships. Can we bridge the gap? (2/8).
1 reply · 0 reposts · 0 likes
@devarda_a
Andrea de Varda
5 months
New paper out in JML! We built a distributional model that learns concept representations from how objects are organized in the visual environment. W/ @PetilliMarco1 and Marco Marelli. (1/8)
1 reply · 3 reposts · 27 likes