Nathan Lambert

@natolambert

Followers
62K
Following
36K
Media
1K
Statuses
10K

Research @allen_ai, reasoning, open models, RL(VR/HF)... Contact via email. Writes @interconnectsai, Wrote The RLHF Book, πŸ”οΈπŸƒβ€β™‚οΈ

Seattle
Joined December 2014
@natolambert
Nathan Lambert
10 days
We present Olmo 3, our next family of fully open, leading language models. This family of 7B and 32B models represents: 1. The best 32B base model. 2. The best 7B Western thinking & instruct models. 3. The first 32B (or larger) fully open reasoning model. This is a big
85
356
2K
@natolambert
Nathan Lambert
10 hours
0
2
12
@natolambert
Nathan Lambert
10 hours
Me recruiting for Ai2 at NeurIPS next week.
9
6
203
@natolambert
Nathan Lambert
10 hours
Banger paper. Finally getting to read the rest of it πŸ˜… There's an award for whoever finds all the secrets first in the new arxiv version that's coming soon.
6
15
250
@kylelostat
Kyle Lo @ NeurIPS 2025
2 days
yikes 😬 it can be difficult to separate mean reviews from critical but helpful reviews. recommend folks avoid seeking this info. no way to consume it rationally & it could harm ur ability to form meaningful professional relationships in future
3
2
40
@natolambert
Nathan Lambert
2 days
Real deals to get my stuff, not sponsored by Nano Banana Pro. This is the last day my book is blanket 50% off, after that you need to beg me for a discount code!
3
0
13
@jacobcares
Jacob Morrison
4 days
πŸ‡ΊπŸ‡ΈπŸ‡ΊπŸ‡ΈπŸ‡ΊπŸ‡ΈπŸ‡ΊπŸ‡ΈπŸ‡ΊπŸ‡Έ
@natolambert
Nathan Lambert
7 days
Announcing Olmo...
0
1
10
@natolambert
Nathan Lambert
4 days
Week after release, Olmo 3 team still grinding to make the paper even more amazing. Folks here really care about you having a great reading experience πŸ’™πŸ¦–πŸ„
5
2
129
@natolambert
Nathan Lambert
4 days
Or, just do it because you want to support @xeophon_ and his love of AI generated otters. Otters ain't cheap folks.
2
1
10
@natolambert
Nathan Lambert
4 days
Want to learn more about open AI models? I've been covering the open model ecosystem closely on Interconnects for years and I wanted to recap how to get the most value out of what we're building. There's a lot of information out there, but it's hard to know how to read it and
4
9
49
@natolambert
Nathan Lambert
5 days
Love to see more fully open post-training recipes (this one for multimodal reasoning). It's surprising how rare open post-training data is, because the opportunity for impact is huge. Lots of people will try it, and simple data methods can still improve on SOTA.
@KaichenZhang358
Kaichen Zhang
6 days
πŸš€ Introducing OpenMMReasoner β€” a transparent, reproducible recipe for multimodal reasoning. We present a 2-stage pipeline using 874K SFT samples with step-by-step validation and 74K high-quality RL samples. Paper: https://t.co/87o8IwI26Y More in thread:
3
21
198
@natolambert
Nathan Lambert
5 days
Evening run in Zion National Park
2
2
89
@natolambert
Nathan Lambert
5 days
Opus? Sorry, living under rocks today.
10
4
171
@natolambert
Nathan Lambert
6 days
With this latest artifacts log roundup of the best open models, I included the list of serious open model builders in the U.S. These 13 are making models way smaller than the Chinese competition, and often with worse licenses. We'll be improving this for an update to the ATOM
@interconnectsai
Interconnects
6 days
Latest open artifacts (#16): Who's building models in the U.S., China's model release playbook, and a resurgence of truly open models A month with SOTA releases with (truly) open model releases left and right. https://t.co/lVhmIZBZGT
14
36
230
@natolambert
Nathan Lambert
6 days
Ai2 has claimed the Mandate of Heaven because we have the most confirmed Catholics. Get rekt bro. Now @Pontifex - GPUs plz!!!
@soldni
Luca Soldaini 🌯 NeurIPS 2025
6 days
The Olmo 3 Trinity
7
4
112
@simonw
Simon Willison
7 days
Wrote up my own notes on trying out Olmo 3 (the 32B thinking model and the 7B instruct model) via LM Studio, plus some thoughts on why transparent training data is so important
simonwillison.net
Olmo is the LLM series from Ai2β€”the Allen Institute for AI. Unlike most open weight models these are notable for including the full training data, training process and checkpoints along …
@natolambert
Nathan Lambert
10 days
We present Olmo 3, our next family of fully open, leading language models. This family of 7B and 32B models represents: 1. The best 32B base model. 2. The best 7B Western thinking & instruct models. 3. The first 32B (or larger) fully open reasoning model. This is a big
16
51
563
@rajammanabrolu
Prithviraj (Raj) Ammanabrolu
7 days
PSA to more junior ppl but "too many papers" comes up as a reason to reject for both industry and academic hiring more often than you'd think
@ilkedemir
Δ°lke Demir
10 days
A colleague brought this miraculous situation to my attention. @NeurIPSConf is there an upper limit on the number of workshop papers that a single person can author? Having 60 (sixty) papers in one conference? How is this humanly possible? https://t.co/XJBvC9Exfd #neurips2025
4
12
229
@natolambert
Nathan Lambert
7 days
Announcing Olmo...
5
1
124