
Griffiths Computational Cognitive Science Lab
@cocosci_lab
Followers: 5K
Following: 95
Media: 9
Statuses: 178
Tom Griffiths' Computational Cognitive Science Lab. Studying the computational problems human minds have to solve.
Princeton, NJ
Joined August 2020
(1/5) Very excited to announce the publication of Bayesian Models of Cognition: Reverse Engineering the Mind. More than a decade in the making, it's a big (600+ pages) beautiful book covering both the basics and recent work:
RT @theryanliu: A short 📹 explainer video on how LLMs can overthink in humanlike ways 😲! Had a blast presenting this at #icml2025 🥳 https:….
RT @LanceYing42: A hallmark of human intelligence is the capacity for rapid adaptation, solving new problems quickly under novel and unfami….
RT @kaiqu_liang: 🤔 Feel like your AI is bullshitting you? It’s not just you. 🚨 We quantified machine bullshit 💩. Turns out, aligning LLMs….
RT @Ikuperwajs: New review on computational approaches to studying human planning out now in @TrendsCognSci! Really enjoyed having the oppo….
RT @JQ_Zhu: Our paper is out today at @NatureHumBehav. We used machine learning to uncover what makes economic games complex for people. ht….
nature.com: Nature Human Behaviour - Zhu et al. use machine learning to reveal complex insights into human strategic decision-making.
Video games are a powerful tool for assessing the inductive biases of AI systems, since they are engineered around how humans perceive the world and pursue their goals. This new benchmark evaluates vision language models on a set of challenging classic video games.
Can GPT, Claude, and Gemini play video games like Zelda, Civ, and Doom II? 𝗩𝗶𝗱𝗲𝗼𝗚𝗮𝗺𝗲𝗕𝗲𝗻𝗰𝗵 evaluates VLMs on Game Boy & MS-DOS games given only raw screen input, just as a human would play. The best model (Gemini) completes just 0.48% of the benchmark! 🧵👇
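To make the setup concrete, here is a minimal sketch of the kind of screen-in, action-out evaluation loop such a benchmark implies. The emulator interface, the `query_vlm` wrapper, and the action set are illustrative assumptions, not VideoGameBench's actual API.

```python
# Minimal sketch of a screen-in, action-out evaluation loop for a VLM agent.
# All names here (emulator, query_vlm, ACTIONS) are illustrative assumptions,
# not the actual VideoGameBench API.

import random
from dataclasses import dataclass

ACTIONS = ["UP", "DOWN", "LEFT", "RIGHT", "A", "B", "START", "SELECT"]  # Game Boy-style controls

@dataclass
class Step:
    screen_png: bytes   # raw rendered frame: the only observation the model receives
    done: bool          # True when the game or the step budget ends
    progress: float     # fraction of scripted checkpoints reached, in [0, 1]

def query_vlm(screen_png: bytes, recent_actions: list[str]) -> str:
    # Stand-in for an API call that sends the screenshot (plus a short action
    # history) to a vision-language model and parses one button press from the reply.
    return random.choice(ACTIONS)

def evaluate(emulator, max_steps: int = 10_000) -> float:
    history: list[str] = []
    step = emulator.reset()
    for _ in range(max_steps):
        if step.done:
            break
        action = query_vlm(step.screen_png, history[-20:])
        if action not in ACTIONS:   # models often emit free text; fall back to a harmless press
            action = "A"
        history.append(action)
        step = emulator.step(action)
    return step.progress            # e.g. 0.0048 would correspond to the ~0.48% headline figure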
In this new preprint, we use methods from cognitive science to explore how well large language models draw inferences from observations and construct interventions to understand complex black-box systems analogous to those that scientists seek to understand.
Using LLMs to build AI scientists is all the rage now (e.g., Google’s AI co-scientist [1] and Sakana’s Fully Automated Scientist [2]), but how much do we understand about their core scientific abilities? We know how LLMs can be vastly useful (solving complex math problems) yet…
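As a toy illustration of the observe-and-intervene setting described above (not the preprint's systems or methods), the sketch below has a "scientist" loop pick informative inputs for a hidden affine rule and prune a hypothesis space against the observed outputs.

```python
# Toy illustration of an observe / intervene loop for probing a black-box system.
# The hidden rule, the hypothesis space, and the intervention strategy are all
# invented for illustration; they are not the systems or methods from the preprint.

def black_box(x: int) -> int:
    # Hidden system the "scientist" is trying to characterize.
    return 2 * x + 1

# Hypothesis space: affine rules y = a*x + b with small integer coefficients.
hypotheses = [(a, b) for a in range(-3, 4) for b in range(-3, 4)]

def choose_intervention(hyps: list[tuple[int, int]]) -> int:
    # Pick the input whose predicted outputs disagree most across surviving
    # hypotheses (a crude information-seeking intervention).
    candidates = range(-5, 6)
    return max(candidates, key=lambda x: len({a * x + b for a, b in hyps}))

observations = []
while len(hypotheses) > 1 and len(observations) < 10:
    x = choose_intervention(hypotheses)
    y = black_box(x)
    observations.append((x, y))
    hypotheses = [(a, b) for a, b in hypotheses if a * x + b == y]

print("observations:", observations)
print("surviving hypotheses:", hypotheses)   # collapses to [(2, 1)]
```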
New preprint shows that training large language models to produce better chains of thought for predicting human decisions also results in them producing better psychological explanations.
1/14 Can we build an AI that thinks like psychologists or economists? 🤔 Our new preprint shows how reinforcement learning (RL) can train LLMs to explain human decisions—not just predict them! That is, we're pushing LLMs beyond mere prediction into explainable cognitive models.
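A rough sketch of the kind of RL objective such training implies: a scalar reward that credits both matching the human's choice and producing a well-judged explanation. The parsing convention, the judge, and the weighting below are placeholders, not the preprint's actual setup.

```python
# Sketch of a reward for RL fine-tuning that values both predicting a human
# choice and explaining it. The weighting, the judge, and the parsing are
# placeholder assumptions, not the preprint's actual training setup.

def parse_completion(completion: str) -> tuple[str, str]:
    # Assume the model is prompted to end with "CHOICE: <option>" after its explanation.
    explanation, _, choice = completion.rpartition("CHOICE:")
    return explanation.strip(), choice.strip()

def judge_explanation(explanation: str) -> float:
    # Stand-in for a learned or LLM-based judge scoring psychological plausibility in [0, 1].
    return min(1.0, len(explanation.split()) / 100.0)  # trivial placeholder heuristic

def reward(completion: str, human_choice: str, alpha: float = 0.5) -> float:
    explanation, predicted = parse_completion(completion)
    prediction_term = 1.0 if predicted == human_choice else 0.0
    explanation_term = judge_explanation(explanation)
    # A policy-gradient method (e.g., PPO-style RL) would maximize this scalar.
    return (1 - alpha) * prediction_term + alpha * explanation_term

print(reward("They avoided the risky gamble because losses loom larger. CHOICE: B", "B"))
```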
This paper uses meta-learning to distill a Bayesian prior into a set of initial weights for a neural network, providing a way to create networks with interpretable soft inductive biases. The resulting networks can learn just as quickly as a Bayesian model when applied to new data.
🤖🧠 Paper out in Nature Communications! 🧠🤖 Bayesian models can learn rapidly. Neural networks can handle messy, naturalistic data. How can we combine these strengths? Our answer: Use meta-learning to distill Bayesian priors into a neural network! 1/n
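A compact, self-contained sketch of the general recipe: meta-train a network on many tasks sampled from the prior, so that the prior ends up encoded in the weights. The Beta prior over a coin's bias, the tiny MLP, and the training loop are illustrative stand-ins, not the paper's setup.

```python
# Toy sketch of distilling a Bayesian prior into a network by meta-training on
# tasks sampled from that prior. The prior (Beta over a coin's bias), the tiny
# MLP, and the training loop are illustrative assumptions, not the paper's setup.

import torch
import torch.nn as nn

torch.manual_seed(0)
ALPHA, BETA, N_FLIPS = 2.0, 2.0, 10   # assumed prior and observation budget

def sample_task(batch: int):
    p = torch.distributions.Beta(ALPHA, BETA).sample((batch,))   # latent coin bias ~ prior
    heads = torch.distributions.Binomial(N_FLIPS, p).sample()    # observed data for each task
    x = torch.stack([heads / N_FLIPS, torch.full_like(heads, float(N_FLIPS))], dim=1)
    y = torch.bernoulli(p)                                       # next flip to predict
    return x, y.unsqueeze(1)

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.BCELoss()

for step in range(2000):                 # meta-training over many prior-sampled tasks
    x, y = sample_task(256)
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()
    opt.step()

# After training, the network's predictions track the Bayesian posterior mean
# (heads + ALPHA) / (N_FLIPS + ALPHA + BETA), i.e. the prior now lives in the weights.
with torch.no_grad():
    for heads in [0, 5, 10]:
        x = torch.tensor([[heads / N_FLIPS, float(N_FLIPS)]])
        bayes = (heads + ALPHA) / (N_FLIPS + ALPHA + BETA)
        print(f"heads={heads:2d}  net={net(x).item():.3f}  posterior mean={bayes:.3f}")
```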
RT @alex_y_ku: (1/11) Evolutionary biology offers a powerful lens into Transformers' learning dynamics! Two learning modes in Transformers (in….
RT @harootonian: 🚨 New preprint alert! 🚨 Thrilled to share new research on teaching! Work supervised by @cocosci_lab, @yael_niv, and @mar….
New preprint! In-context and in-weights learning are two interacting forms of plasticity, like genetic evolution and phenotypic plasticity. We use ideas from evolutionary biology to predict when neural networks will use each kind of learning.
(1/11) Evolutionary biology offers a powerful lens into Transformers' learning dynamics! Two learning modes in Transformers (in-weights & in-context) mirror adaptive strategies in evolution. Crucially, environmental predictability shapes both systems similarly.
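To make the analogy concrete, here is a toy payoff calculation (invented numbers, not the preprint's analysis): a fixed, in-weights-like strategy is free but only pays off when the environment matches past experience, while a plastic, in-context-like strategy always adapts but pays a cost. As predictability rises, the fixed strategy takes over.

```python
# Toy calculation of the evolutionary analogy: fixed (in-weights-like) vs
# plastic (in-context-like) strategies under varying environmental predictability.
# The payoff model and numbers are invented for illustration only.

import random

random.seed(0)
ADAPTATION_COST = 0.15   # assumed per-episode cost of relying on plasticity / context

def expected_payoff(predictability: float, n_episodes: int = 10_000) -> tuple[float, float]:
    fixed_total, plastic_total = 0.0, 0.0
    for _ in range(n_episodes):
        env_matches_past = random.random() < predictability   # does the stored solution still apply?
        fixed_total += 1.0 if env_matches_past else 0.0        # in-weights: free but brittle
        plastic_total += 1.0 - ADAPTATION_COST                 # in-context: always adapts, pays a cost
    return fixed_total / n_episodes, plastic_total / n_episodes

for predictability in [0.5, 0.7, 0.9, 0.99]:
    fixed, plastic = expected_payoff(predictability)
    better = "in-weights" if fixed > plastic else "in-context"
    print(f"predictability={predictability:.2f}  fixed={fixed:.2f}  plastic={plastic:.2f}  -> {better}")
```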
RT @gianlucabencomo: Every ChatGPT query costs more energy than the entire life of a fruit fly.
RT @VminVsky: New paper: Language models have “universal” concept representation – but can they capture cultural nuance? 🌏. If someone from….
We are looking for a new lab manager, shared with the Concepts and Cognition Lab of @TaniaLombrozo. Apply here:
RT @MaxDavidGupta1: Happy to share my first first-authored work at @cocosci_lab. Determining sameness or difference between objects is utte….
RT @Lance_Ying42: Many studies suggest AI has achieved human-like performance on various cognitive tasks. But what is “human-like” performa….
RT @gianlucabencomo: New pre-print! In this work, we explore the extent to which different inductive biases can be instantiated among dispa….
arxiv.org: Artificial neural networks can acquire many aspects of human knowledge from data, making them promising as models of human learning. But what those networks can learn depends upon their inductive...
New preprint reveals that large language models blend two distinct representations of numbers -- as strings and as integers -- which can lead to some surprising errors. This work shows how methods from cognitive science can be useful for understanding AI systems.
1/n LLMs learn to represent numbers by predicting tokens in text. This poses a challenge: depending on context, the same set of digits can be treated as a number or a string. Given this duality, we ask: what is a number in the eyes of an LLM? Is it a string or an integer? Or both?
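One way to probe this duality (a toy stand-in, not the preprint's actual analysis) is to ask whether similarity between number representations tracks numeric distance, string edit distance, or both. The sketch below runs that regression on a fabricated embedding you would replace with real LLM hidden states.

```python
# Toy probe for a blended number representation: regress pairwise representation
# similarity on (a) numeric distance and (b) string edit distance, and see which
# carries weight. The embedding function below is a stand-in for real LLM hidden
# states; the mixing it encodes is invented, not the preprint's findings.

import itertools
import numpy as np

rng = np.random.default_rng(0)

def levenshtein(a: str, b: str) -> int:
    # Standard dynamic-programming edit distance between two digit strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def fake_embedding(n: int, dim: int = 32) -> np.ndarray:
    # Stand-in for an LLM's representation of n: part numeric magnitude,
    # part digit-string features, plus noise.
    digits = f"{n:04d}"
    string_part = np.array([int(d) for d in digits], dtype=float)
    return np.concatenate([[n / 1000.0] * 4, string_part, rng.normal(0, 0.05, dim - 8)])

numbers = list(range(0, 1000, 37))
emb = {n: fake_embedding(n) for n in numbers}

rows, sims = [], []
for a, b in itertools.combinations(numbers, 2):
    numeric_dist = abs(a - b) / 1000.0
    string_dist = levenshtein(f"{a:04d}", f"{b:04d}") / 4.0
    cos = emb[a] @ emb[b] / (np.linalg.norm(emb[a]) * np.linalg.norm(emb[b]))
    rows.append([numeric_dist, string_dist, 1.0])
    sims.append(cos)

coef, *_ = np.linalg.lstsq(np.array(rows), np.array(sims), rcond=None)
print(f"weight on numeric distance: {coef[0]:+.3f}")
print(f"weight on string distance:  {coef[1]:+.3f}")   # both nonzero -> a blended code
```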
RT @baixuechunzi: Excited to share that our paper is now out in @PNASNews! 🎉 Check it out: Code and data: https://….