noahdgoodman (@noahdgoodman)

Followers: 5K · Following: 365 · Media: 8 · Statuses: 260

Professor of natural and artificial intelligence @Stanford. Alignment at @GoogleDeepMind. (@StanfordNLP @StanfordAILab etc)

Joined November 2019
@noahdgoodman
noahdgoodman
1 month
So proud! Go work with Gabriel; he’ll be the best advisor.
@GabrielPoesia
Gabriel Poesia
1 month
Thrilled to join the UMich faculty in 2026! I'll also be recruiting PhD students this upcoming cycle. If you're interested in AI and formal reasoning, consider applying!
0
5
30
@noahdgoodman
noahdgoodman
2 months
RT @sydneymlevine: 🔥 New position piece! 🔥 In this paper we lay out our vision for AI Alignment as guided by "Resource Rational Contractual….
0
21
0
@noahdgoodman
noahdgoodman
2 months
It’s like chain-of-thought for humans!
@danielwurgaft
Daniel Wurgaft
2 months
Can we record and study human chains of thought? The think-aloud method, where participants voice their thoughts as they solve a task, offers a way! In our #CogSci2025 paper co-led with Ben Prystawski, we introduce a method to automate analysis of human reasoning traces! (1/8)🧵
0
4
13
@noahdgoodman
noahdgoodman
2 months
It turns out that a lot of the most interesting behavior of LLMs can be explained without knowing anything about architecture or learning algorithms. Here we predict the rise (and fall) of in-context learning using hierarchical Bayesian methods.
@EkdeepL
Ekdeep Singh
2 months
🚨New paper! We know models learn distinct in-context learning strategies, but *why*? Why generalize instead of memorize to lower loss? And why is generalization transient? Our work explains this & *predicts Transformer behavior throughout training* without its weights! 🧵 1/
4
20
111
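For a feel of the idea in the thread above, here is a toy Bayesian model-comparison sketch, deliberately simplified and non-hierarchical, with made-up priors and per-example likelihoods rather than anything from the paper. It treats the network's behavior as a posterior-weighted mixture of a "generalizing" and a "memorizing" strategy: the simplicity prior favors generalization early, and the fit term makes memorization take over as training data accumulates, so generalization is transient.

```python
import numpy as np

# Toy sketch (assumed numbers, not the paper's model): behavior is a posterior-weighted
# mixture of two strategies. "memorize" fits each training example better but gets a much
# smaller prior (a more complex hypothesis); "generalize" is simpler but fits each example
# slightly worse. As data accumulates, the fit term overwhelms the prior, so the
# generalizing (in-context) solution dominates early in training and fades later.

log_prior = {"generalize": np.log(0.99), "memorize": np.log(0.01)}   # simplicity prior (assumed)
loglik_per_example = {"generalize": -0.30, "memorize": -0.05}        # per-example fit (assumed)

def strategy_posterior(n_examples):
    """Posterior over strategies after observing n_examples training examples."""
    score = {k: log_prior[k] + n_examples * loglik_per_example[k] for k in log_prior}
    z = np.logaddexp(score["generalize"], score["memorize"])
    return {k: float(np.exp(v - z)) for k, v in score.items()}

for n in [0, 5, 10, 20, 40]:
    p = strategy_posterior(n)
    print(f"n={n:2d}  P(generalize)={p['generalize']:.3f}  P(memorize)={p['memorize']:.3f}")
```

Running this prints P(generalize) near 0.99 at the start, falling below 0.01 by 40 examples: a crude picture of the rise-then-fall dynamic, under the assumed numbers.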
@noahdgoodman
noahdgoodman
2 months
congrats dr poesía!
@chrmanning
Christopher Manning
2 months
Congratulations to @GabrielPoesia on receiving his @Stanford PhD today!
0
0
11
@noahdgoodman
noahdgoodman
3 months
RT @LawZero_: Every frontier AI system should be grounded in a core commitment: to protect human joy and endeavour. Today, we launch @LawZe….
0
86
0
@noahdgoodman
noahdgoodman
5 months
RT @rohinmshah: Just released GDM’s 100+ page approach to AGI safety & security! (Don’t worry, there’s a 10-page summary.) AGI will be tra….
0
71
0
@noahdgoodman
noahdgoodman
5 months
RT @ma_tay_: 🤔🤖Most AI systems assume there’s just one right answer—but many tasks have reasonable disagreement. How can we better model hu….
0
37
0
@noahdgoodman
noahdgoodman
6 months
RT @mcxfrank: AI models are fascinating, impressive, and sometimes problematic. But what can they tell us about the human mind? In a new r….
0
41
0
@noahdgoodman
noahdgoodman
6 months
“Four habits of highly effective STaRs”: we show that certain high-level cognitive behaviors are necessary for learning to reason through RL. Exciting!
@gandhikanishk
Kanishk Gandhi
6 months
New Paper!! We try to understand why some LMs self-improve their reasoning while others hit a wall. The key? Cognitive behaviors! Read our paper on how the right cognitive behaviors can make all the difference in a model's ability to improve with RL! 🧵1/13
0
5
41
@noahdgoodman
noahdgoodman
6 months
This may be a deep failure of understanding or a shallow result of current alignment goals. But it is striking to see in a world where language models *get everything right*.
1
0
7
@noahdgoodman
noahdgoodman
6 months
In new work we explored the behavior of large language models on the same task. We found models were very poor at understanding hyperbole, and even reversed the patterns seen in human data for halo effects.
1
0
5
@noahdgoodman
noahdgoodman
6 months
Nonliteral understanding of number words | PNAS
1
0
4
@noahdgoodman
noahdgoodman
6 months
In 2014 Kao, Wu, Bergen, and I studied number word interpretation, such as: "That latte cost a million dollars" vs. "That latte cost three dollars" (hyperbole); "I'll meet you at 11:57a" vs. "I'll meet you at noon" (halo).
1
0
6
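As a rough illustration of the kind of model behind that 2014 paper, and of what the tweets above compare LLMs against, here is a heavily simplified Rational-Speech-Acts-style toy in Python. All prices, priors, and parameters are invented for illustration; it is not the paper's implementation. The point is only that a listener who reasons about why a speaker chose a particular number can jointly recover a plausible price and the speaker's affect, which is what lets "a million dollars" read as hyperbole rather than as a literal claim.

```python
import itertools
import numpy as np

# Toy, RSA-style sketch in the spirit of Kao, Wu, Bergen & Goodman (2014).
# All priors and parameters below are made up for illustration; this is not
# the paper's model or parameterization.
prices = [50, 500, 10000]                        # candidate latte prices (hypothetical)
affects = [0, 1]                                 # 0 = speaker not upset, 1 = upset
utterances = [50, 500, 10000]                    # number words the speaker can use

p_price = {50: 0.80, 500: 0.19, 10000: 0.01}     # prior: extreme prices are very unlikely
p_upset_given_price = {50: 0.1, 500: 0.6, 10000: 0.9}  # pricier -> more likely upset

quds = ["price", "affect"]                       # what the speaker might care about conveying

def affect_prob(s, a):
    return p_upset_given_price[s] if a == 1 else 1 - p_upset_given_price[s]

def literal_listener(u):
    """L0: interpret u literally over prices, carrying the affect prior along."""
    return {(s, a): (1.0 if s == u else 0.0) * affect_prob(s, a)
            for s, a in itertools.product(prices, affects)}

def project(state, qud):
    s, a = state
    return s if qud == "price" else a

def speaker(state, qud, alpha=4.0):
    """S1: pick an utterance to convey the QUD-projected value of the true state."""
    def informativity(u):
        l0 = literal_listener(u)
        z = sum(l0.values())
        return sum(v for st, v in l0.items()
                   if project(st, qud) == project(state, qud)) / z
    utils = np.array([alpha * np.log(informativity(u) + 1e-10) for u in utterances])
    probs = np.exp(utils - np.logaddexp.reduce(utils))
    return dict(zip(utterances, probs))

def pragmatic_listener(u):
    """L1: jointly infer (price, affect), marginalizing over which QUD the speaker had."""
    scores = {}
    for s, a in itertools.product(prices, affects):
        prior = p_price[s] * affect_prob(s, a)
        likelihood = np.mean([speaker((s, a), q)[u] for q in quds])  # uniform over QUDs
        scores[(s, a)] = prior * likelihood
    z = sum(scores.values())
    return {k: v / z for k, v in scores.items()}

# Hearing "10000" puts most mass on a modest price plus an upset speaker,
# rather than on a literal $10000 latte -- i.e., hyperbole.
for state, p in sorted(pragmatic_listener(10000).items(), key=lambda kv: -kv[1]):
    print(state, round(p, 3))
```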
@noahdgoodman
noahdgoodman
6 months
A note on hyperbole, halo, and language models. No, not about startup valuations!
1
3
12
@noahdgoodman
noahdgoodman
6 months
RT @elicitorg: We raised a $22M Series A and are launching Elicit Reports, a better version of Deep Research for actual researchers. Elici….
0
92
0
@noahdgoodman
noahdgoodman
7 months
Congrats to OAI on producing a reasoning model! Their opaque tweets demonstrate that they’ve (independently) found some of the core ideas that we did on our way to STaR.
@markchen90
Mark Chen
7 months
Congrats to DeepSeek on producing an o1-level reasoning model! Their research paper demonstrates that they’ve independently found some of the core ideas that we did on our way to o1.
29
129
2K
@noahdgoodman
noahdgoodman
8 months
RT @jphilippfranken: Presenting this tomorrow at @NeurIPSConf East Exhibit Hall A-C #2111 (4:30 p.m. PST — 7:30 p.m. PST). Come along if yo….
0
5
0
@noahdgoodman
noahdgoodman
8 months
RT @GabrielPoesia: If you're at NeurIPS, come tomorrow for the Oral+Poster on "Learning Formal Mathematics from Intrinsic Motivation"! Real….
0
19
0