
nathan lile
@NathanThinks
Followers
2K
Following
28K
Media
248
Statuses
2K
ceo/cofounder @ https://t.co/bDd3J4Lmzf hiring in SF. scaling synthetic reasoning. recurrent rabbit hole victim. nothing great is easy.
San Francisco
Joined August 2013
Superintelligence isn't about discovering new things; it's about discovering new ways to discover. I think our latest work formalizes Meta Chain-of-Thought, which we believe lies on the path to ASI. When we train models on the problem-solving process itself, rather than the final…
We have a new position paper on "inference-time compute" and what we have been working on over the last few months! We present some theory on why it is necessary, how it works, why we need it, and what it means for "super" intelligence.
4
28
133
RT @tszzl: you have no idea how hard it is to get an rlhf model to be even "centrist" much less right reactionary. they must have beat this…
0
181
0
RT @sama: I'm not big on identities, but I am extremely proud to be American. This is true every day, but especially today. I firmly believe…
0
3K
0
RT @FredericLambert: Xiaomi got 200,000 orders in 3 minutes for the YU7 and I'm not even surprised. The value proposition is just nuts. I…
0
52
0
What if models could learn which problems _deserve_ deep thinking? No labels. Just let the model discover difficulty through its own performance during training. Instead of burning compute on trivial problems, it allocates 5x more on problems that actually need it.
Our new method (ALP) monitors solve rates across RL rollouts and applies inverse difficulty penalties during RL training. Result? Models learn an implicit difficulty estimator, allocating 5x more tokens to hard vs easy problems, cutting overall usage by 50%. 1/10
1
6
37
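The idea in the thread above can be sketched in a few lines: penalize tokens in inverse proportion to difficulty, where difficulty is just the observed failure rate across rollouts. This is a minimal illustration, not ALP itself; `base_penalty` and `eps` are made-up hyperparameters, and the real method shapes rewards inside an RL loop rather than in a standalone function.

```python
def alp_penalty(solve_rate, base_penalty=0.001, eps=1e-3):
    """Per-token penalty scaled inversely with difficulty.

    Difficulty is estimated as the failure rate across rollouts:
    easy problems (high solve rate, low difficulty) get a large
    per-token cost, so long answers only pay off on hard problems.
    `base_penalty` and `eps` are illustrative values, not the paper's.
    """
    difficulty = 1.0 - solve_rate          # fraction of rollouts that failed
    return base_penalty / (difficulty + eps)

def shaped_reward(correct, n_tokens, solve_rate):
    """Task reward minus a difficulty-aware token cost."""
    return float(correct) - alp_penalty(solve_rate) * n_tokens

# Easy problem (90% of rollouts solve it): 500 tokens is expensive.
easy = shaped_reward(correct=True, n_tokens=500, solve_rate=0.9)
# Hard problem (10% solve it): the same length costs far less.
hard = shaped_reward(correct=True, n_tokens=500, solve_rate=0.1)
assert hard > easy
```

Under this shaping, the only way to spend a large token budget profitably is on problems the policy currently fails often, which is how an implicit difficulty estimator can emerge without any difficulty labels.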
RT @JessePeltan: China is winning the race to Type 1 Civilization and we're not even aware it's happening. By 2030, China will have the ma…
0
856
0
RT @ashVaswani: Check out our latest research on data. We're releasing 24T tokens of richly labelled web data. We found it very useful for…
0
81
0
RT @JamesAlcorn94: congrats @rm_rafailov on your hard-earned acceptance to the USofA as alien of officially extraordinary ability. The alie…
0
2
0
RT @rm_rafailov: When we first published our work on this 9 months ago it was rejected for being impractical in realistic cases. Six month…
0
14
0
Generative Reward Models' impact compounds daily. way stronger interest now than when we published last fall. many excellent recent extensions; cool seeing where researchers take GenRM.
we bootstrapped our way to generalized meta-reasoning capabilities with generative reward models. classical reward models can be worse than random on new reasoning tasks. we see improvements in robustness, generalization, interpretability, and an opportunity to unify RLHF/RLAIF.
1
2
19
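The core move behind a generative reward model, as described above, is to ask a language model to *judge* an answer and read the reward off its verdict. A minimal sketch of that interface, with an entirely hypothetical `judge_lm` callable (prompt in, next-token probabilities out) standing in for a real LM:

```python
def genrm_score(judge_lm, question, answer):
    """Generative reward: normalized probability of a 'Yes' verdict.

    `judge_lm` is a hypothetical interface: any callable mapping a
    prompt string to a {token: probability} dict over next tokens.
    The prompt template is illustrative, not the published one.
    """
    prompt = (
        f"Question: {question}\n"
        f"Proposed answer: {answer}\n"
        "Is the answer correct? Answer Yes or No.\nAnswer:"
    )
    probs = judge_lm(prompt)
    yes, no = probs.get("Yes", 0.0), probs.get("No", 0.0)
    return yes / (yes + no + 1e-9)

# Toy stand-in judge: confident 'Yes' only when the digit 4 appears.
def toy_judge(prompt):
    if "4" in prompt:
        return {"Yes": 0.9, "No": 0.1}
    return {"Yes": 0.2, "No": 0.8}

right = genrm_score(toy_judge, "What is 2+2?", "4")
wrong = genrm_score(toy_judge, "What is 2+2?", "5")
assert right > wrong
```

Because the judge is itself a generative model, it can produce reasoning before its verdict, which is what the GRPO extension mentioned above optimizes.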
RT @nathanfielder: I was going to call this dumb, but former NTSB board member John Goglia just texted me and told me to reply with this in…
0
2K
0
RT @MetaPuppet: This is Plastic. Made with Veo3. Spoilers in the next post. Watch before reading
0
536
0
RT @NathanThinks: btw we have ongoing research on this front! we're open-science, pro-publication, and love collaboration. want to push th…
0
8
0
RT @HashemGhaili: Prompt Theory (Made with Veo 3). What if AI-generated characters refused to believe they were AI-generated? https://t.co/…
0
4K
0
Platonic GANs. >Repeat after me: your embeddings were never yours.
excited to finally share on arxiv what we've known for a while now: All Embedding Models Learn The Same Thing. embeddings from different models are SO similar that we can map between them based on structure alone, without *any* paired data. feels like magic, but it's real.
0
0
4
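The structural signature the thread above relies on is easy to demonstrate: if two embedding spaces differ only by a rotation of basis, their pairwise-similarity (Gram) structure is identical, and that shared structure is what unsupervised alignment can latch onto. A toy numpy sketch, with "model B" simulated as a rotated copy of "model A" rather than a real second model:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Model A": embeddings for 6 inputs in a 4-d space (random toy data).
A = rng.normal(size=(6, 4))

# "Model B": same geometry in a different basis. A random orthogonal
# matrix Q stands in for a second model that learned the same structure.
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
B = A @ Q

# Coordinates differ, but the pairwise-similarity structure does not:
# B @ B.T == A @ Q @ Q.T @ A.T == A @ A.T since Q is orthogonal.
assert not np.allclose(A, B)
assert np.allclose(A @ A.T, B @ B.T)
```

Real embedding models are not exact rotations of each other, so the claimed mapping is only approximate, but this invariance is the intuition behind matching spaces on structure alone.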
btw we have ongoing research on this front! we're open-science, pro-publication, and love collaboration. want to push this frontier forward? we're growing our SF team & always open to research partners; reach out, my DMs are open.
excellent work by @jaseweston & team, extending our "Generative Reward Models" work with RL (GRPO) to optimize LLM reasoning during judgment. scalable (synthetic) evaluation continues to be AI's key bottleneck!
17
8
56