Seungwook Han Profile
Seungwook Han
@seungwookh

Followers: 441 · Following: 694 · Media: 21 · Statuses: 153

phd-ing @MIT_CSAIL, prev @MITIBMLab @columbia

Joined June 2017
@seungwookh
Seungwook Han
9 months
🧙‍♂️Excited to share our new whitepaper “General Reasoning Requires Learning to Reason from the Get-Go.” We argue that simply making models bigger and feeding them more data is NOT enough for robust, adaptable reasoning. (1/n)
1
12
79
@shannonzshen
Shannon Shen
15 days
Today's AI agents are optimized to complete tasks in one shot. But real-world tasks are iterative, with evolving goals that need collaboration with users. We introduce collaborative effort scaling to evaluate how well agents work with people—not just complete tasks 🧵
6
51
264
@kvfrans
Kevin Frans
15 days
What really matters in matrix-whitening optimizers (Shampoo/SOAP/PSGD/Muon)? We ran a careful comparison, dissecting each algorithm. Interestingly, we find that proper matrix-whitening can be seen as *two* transformations, and not all optimizers implement both. Blog:
5
48
327
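For intuition on the "two transformations" framing: whitening a gradient matrix G can be written as a left preconditioner (G Gᵀ)^(-1/4) and a right preconditioner (Gᵀ G)^(-1/4), and applying both is the same as replacing G by its orthogonal polar factor U Vᵀ. A minimal NumPy sketch of that identity (my own illustration, not code from the blog post):

```python
import numpy as np

def whitened_update(G: np.ndarray) -> np.ndarray:
    """Idealized matrix-whitening of a gradient matrix G.

    The left transform (G G^T)^(-1/4) and the right transform
    (G^T G)^(-1/4) together set every singular value of G to 1,
    which equals the orthogonal polar factor U V^T from G's SVD.
    """
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(0)
G = rng.standard_normal((64, 32))
W = whitened_update(G)
# All singular values of the whitened update are ~1:
print(np.allclose(np.linalg.svd(W, compute_uv=False), 1.0))  # True
```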
@gabe_grand
Gabe Grand
19 days
Do AI agents ask good questions? We built “Collaborative Battleship” to find out—and discovered that weaker LMs + Bayesian inference can beat GPT-5 at 1% of the cost. Paper, code & demos: https://t.co/lV76HRKR3d Here's what we learned about building rational information-seeking
4
33
159
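The Bayesian recipe here is classic expected-information-gain question selection: keep a posterior over hypotheses (candidate boards) and ask the question whose answer is expected to shrink entropy the most. A toy sketch of that loop (my reconstruction of the general idea, with made-up hypothesis names, not the paper's code):

```python
import math

def entropy(weights):
    z = sum(weights.values())
    return -sum((w / z) * math.log2(w / z) for w in weights.values() if w > 0)

def expected_info_gain(weights, answer_fn):
    """EIG of a yes/no question: H(prior) - E_answer[H(posterior)]."""
    z = sum(weights.values())
    gain = entropy(weights)
    for ans in (True, False):
        post = {h: w for h, w in weights.items() if answer_fn(h) == ans}
        p_ans = sum(post.values()) / z
        if p_ans > 0:
            gain -= p_ans * entropy(post)
    return gain

# Toy example: 3 equally likely "boards"; question = "is the ship in row 0?"
weights = {"board_a": 1.0, "board_b": 1.0, "board_c": 1.0}
q = lambda h: h in ("board_a", "board_b")  # the answer under each hypothesis
print(expected_info_gain(weights, q))      # ~0.918 bits
```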
@seungwookh
Seungwook Han
23 days
gentle robots are what we want
@gabe_mrgl
Gabe Margolis
25 days
Excited to share SoftMimic -- a new approach for learning compliant humanoid policies that interact gently with the world.
0
0
2
@phillip_isola
Phillip Isola
1 month
Over the past year, my lab has been working on fleshing out theory/applications of the Platonic Representation Hypothesis. Today I want to share two new works on this topic: Eliciting higher alignment: https://t.co/KY4fjNeCBd Unpaired rep learning: https://t.co/vJTMoyJj5J 1/9
10
120
696
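For readers new to this line of work: "alignment" between two models is often measured with a mutual k-nearest-neighbor metric, i.e. how much the neighborhood structure of one model's embeddings matches the other's on the same inputs. A minimal sketch of that metric (an illustration consistent with the PRH papers, not their exact code):

```python
import numpy as np

def mutual_knn_alignment(X: np.ndarray, Y: np.ndarray, k: int = 10) -> float:
    """Average fraction of shared k-nearest-neighbor sets between two
    (n_samples, dim) representations of the same inputs."""
    def knn(Z):
        d = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)          # ignore self as a neighbor
        return np.argsort(d, axis=1)[:, :k]
    nx, ny = knn(X), knn(Y)
    return float(np.mean([len(set(a) & set(b)) / k for a, b in zip(nx, ny)]))

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32))
Q, _ = np.linalg.qr(rng.standard_normal((32, 32)))
print(mutual_knn_alignment(X, X @ Q))  # rotations preserve neighbors: 1.0
```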
@sharut_gupta
Sharut Gupta
1 month
[1/7] Paired multimodal learning shows that training with text can help vision models learn better image representations. But can unpaired data do the same? Our new work shows that the answer is yes! w/ @shobsund @ChenyuW64562111, Stefanie Jegelka and @phillip_isola
11
53
437
@seungwookh
Seungwook Han
1 month
you will learn so much and have fun working with her!
@materzynska
Joanna
1 month
I am looking for motivated students to join my team at @AIatMeta FAIR for a summer internship. If you have experience with motion modeling / diffusion models and/or social AI please feel free to reach out! 🤖✨
0
0
6
@jaschasd
Jascha Sohl-Dickstein
2 months
Title: Advice for a young investigator in the first and last days of the Anthropocene
Abstract: Within just a few years, it is likely that we will create AI systems that outperform the best humans on all intellectual tasks. This will have implications for your research and
58
259
2K
@jxbz
Jeremy Bernstein
2 months
I wrote this blog post that tries to go further toward design principles for neural nets and optimizers. The post presents a visual intro to optimization on normed manifolds and a Muon variant for the manifold of matrices with unit condition number https://t.co/EhhKN2Jylx
@thinkymachines
Thinking Machines
2 months
Efficient training of neural networks is difficult. Our second Connectionism post introduces Modular Manifolds, a theoretical step toward more stable and performant training by co-designing neural net optimizers with manifold constraints on weight matrices.
23
55
473
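Context for the Muon variant mentioned above: base Muon replaces the gradient with an approximation of its orthogonal polar factor, computed by an odd-polynomial Newton-Schulz iteration instead of an SVD. A minimal NumPy sketch, with coefficients taken from the public Muon write-up (background only, not the post's new manifold math):

```python
import numpy as np

def newton_schulz_orthogonalize(G: np.ndarray, steps: int = 5) -> np.ndarray:
    """Approximate the orthogonal polar factor of G without an SVD,
    using the odd-polynomial Newton-Schulz iteration Muon is built on."""
    a, b, c = 3.4445, -4.7750, 2.0315   # coefficients from the Muon write-up
    X = G / (np.linalg.norm(G) + 1e-7)  # Frobenius norm bounds the spectral norm
    transposed = X.shape[0] > X.shape[1]
    if transposed:
        X = X.T                          # iterate on the wide orientation
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if transposed else X

G = np.random.default_rng(0).standard_normal((64, 32))
W = newton_schulz_orthogonalize(G)
print(np.linalg.svd(W, compute_uv=False).round(2))  # singular values near 1
```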
@seungwookh
Seungwook Han
2 months
Why do models forget less with RL than SFT?
@jyo_pari
Jyo Pari
2 months
For agents to improve over time, they can’t afford to forget what they’ve already mastered. We found that supervised fine-tuning forgets more than RL when training on a new task! Want to find out why? 👇
0
0
2
@jyo_pari
Jyo Pari
3 months
We have a fun collaboration of @GPU_MODE x @scaleml coming up! We’re hosting a week-long online bootcamp that explores the core components of GPT-OSS while also diving into cutting-edge research that pushes beyond what’s currently in GPT-OSS! For example, how can MoE's power
1
22
72
@seungwookh
Seungwook Han
4 months
to clarify, not saying we’re there atm. i don’t have a formal definition of what it means to be human, but agency and the ability to continually learn seem to be important
0
0
0
@seungwookh
Seungwook Han
4 months
this tendency to anthropomorphize is too real and deters me from reading work with such titles. on the other hand, a part of me also asks: how do we know when a thing is conscious and should start analyzing it as if it were another human-like organism? we automatically assume all humans are
@fchollet
François Chollet
4 months
Resist the tendency to anthropomorphize that which is not human
1
0
3
@seungwookh
Seungwook Han
4 months
uncertainty-aware reasoning, akin to how humans leverage their confidence
@MehulDamani2
Mehul Damani
4 months
🚨New Paper!🚨 We trained reasoning LLMs to reason about what they don't know. o1-style reasoning training improves accuracy but produces overconfident models that hallucinate more. Meet RLCR: a simple RL method that trains LLMs to reason and reflect on their uncertainty --
0
1
3
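The core idea admits a one-line reward. A hedged sketch of the correctness-plus-Brier-score shape of objective the thread describes (my paraphrase of the idea, not the paper's exact implementation):

```python
def calibration_reward(correct: bool, confidence: float) -> float:
    """Reward that pays for being right AND for stating honest confidence.

    correctness term: 1 if the answer is right, else 0
    Brier penalty:    (confidence - correctness)^2, minimized by reporting
                      the true probability of being correct.
    """
    y = 1.0 if correct else 0.0
    return y - (confidence - y) ** 2

# Confidently wrong answers are penalized hardest:
print(calibration_reward(True, 0.9))   #  0.99
print(calibration_reward(False, 0.9))  # -0.81
print(calibration_reward(False, 0.1))  # -0.01
```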
@seungwookh
Seungwook Han
4 months
was actually wondering with @hyundongleee about the fundamental differences between diffusion and autoregressive modeling, beyond the structure imposed on the modeling of the sequential conditional distribution, and how those differences manifest. a pertinent paper that addresses this thought
@mihirp98
Mihir Prabhudesai
4 months
🚨 The era of infinite internet data is ending, So we ask: 👉 What’s the right generative modelling objective when data—not compute—is the bottleneck? TL;DR: ▶️Compute-constrained? Train Autoregressive models ▶️Data-constrained? Train Diffusion models Get ready for 🤿 1/n
0
1
13
@seungwookh
Seungwook Han
4 months
omw to trying this out 👀
@pika_labs
Pika
4 months
Some news: We're building the next big thing — the first-ever AI-only social video app, built on a highly expressive human video model. Over the past few weeks, we’ve been testing it in private beta. Now, we’re opening early access: download the iOS app to join the waitlist, or
0
0
0
@seungwookh
Seungwook Han
4 months
how particles can behave differently across scales and conditions, and how we can harness that as part of design, is cool
@MITarchitecture
MIT Architecture
10 months
8. Jeonghyun Yoon: Precisely Loose: Unraveling the Potential of Particles
A big thank you goes out to the entire architecture community, including advisors, readers, staff, family and peers who helped bring these projects to light. Image credit: Chenyue “xdd” Dai 2/2
0
0
4
@LakerNewhouse
Laker Newhouse
4 months
[1/9] We created a performant Lipschitz transformer by spectrally regulating the weights—without using activation stability tricks: no layer norm, QK norm, or logit softcapping. We think this may address a “root cause” of unstable training.
14
79
585
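Background for the spectral-regulation idea: bounding each weight matrix's largest singular value bounds that layer's Lipschitz constant, so the network can stay stable without normalization layers. A minimal power-iteration sketch of one way to cap the spectral norm (an illustration of the principle, not the authors' exact method):

```python
import numpy as np

def spectral_cap(W: np.ndarray, target: float = 1.0, iters: int = 20) -> np.ndarray:
    """Rescale W so its largest singular value is at most `target`.

    Power iteration estimates sigma_max cheaply; capping it bounds the
    layer's Lipschitz constant.
    """
    v = np.random.default_rng(0).standard_normal(W.shape[1])
    for _ in range(iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    sigma = float(u @ W @ v)  # estimated largest singular value
    return W * min(1.0, target / sigma)

W = np.random.default_rng(1).standard_normal((128, 128))
print(np.linalg.norm(spectral_cap(W), 2))  # spectral norm is now <= ~1.0
```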
@seungwookh
Seungwook Han
4 months
But actually this is the OG way of doing it, and you should stop by E-2103 to see @jxbz and Laker Newhouse whiteboard the whole paper.
@jxbz
Jeremy Bernstein
4 months
Laker and I are presenting this work in an hour at ICML poster E-2103. It’s on a theoretical framework and language (modula) for optimizers that are fast (like Shampoo) and scalable (like muP). You can think of modula as Muon extended to general layer types and network topologies
1
6
75