Seungwook Han
@seungwookh
441 Followers · 694 Following · 21 Media · 153 Statuses
phd-ing @MIT_CSAIL, prev @MITIBMLab @columbia
Joined June 2017
🧙‍♂️Excited to share our new whitepaper “General Reasoning Requires Learning to Reason from the Get-Go.” We argue that simply making models bigger and feeding them more data is NOT enough for robust, adaptable reasoning. (1/n)
1
12
79
Today's AI agents are optimized to complete tasks in one shot. But real-world tasks are iterative, with evolving goals that need collaboration with users. We introduce collaborative effort scaling to evaluate how well agents work with people—not just complete tasks 🧵
6
51
264
What really matters in matrix-whitening optimizers (Shampoo/SOAP/PSGD/Muon)? We ran a careful comparison, dissecting each algorithm. Interestingly, we find that proper matrix-whitening can be seen as *two* transformations, and not all optimizers implement both. Blog:
5
48
327
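For context, here is a minimal NumPy sketch of what "matrix whitening" means for a gradient matrix, and why it can be read as two separate transformations: a left preconditioner that whitens the row (output) space and a right preconditioner that whitens the column (input) space, in the Shampoo-style form L^{-1/4} G R^{-1/4}. This is an illustration of the general idea, not the blog's exact algorithms; the damping constant and the single-step (non-accumulated) preconditioners are my own simplifications.

```python
import numpy as np

def matrix_whiten(G, eps=1e-8):
    """Illustrative Shampoo-style whitening of a gradient matrix G.

    Two distinct transformations:
      * a left preconditioner from the row covariance    L = G @ G.T
      * a right preconditioner from the column covariance R = G.T @ G
    The whitened update is L^{-1/4} @ G @ R^{-1/4}.
    """
    L = G @ G.T + eps * np.eye(G.shape[0])   # output-side covariance
    R = G.T @ G + eps * np.eye(G.shape[1])   # input-side covariance

    def inv_fourth_root(M):
        # matrix power of a symmetric PSD matrix via eigendecomposition
        w, V = np.linalg.eigh(M)
        return V @ np.diag(np.clip(w, eps, None) ** -0.25) @ V.T

    return inv_fourth_root(L) @ G @ inv_fourth_root(R)

G = np.random.randn(64, 32)
W = matrix_whiten(G)
# after whitening, the singular values are flattened toward 1
print(np.linalg.svd(G, compute_uv=False)[:3], np.linalg.svd(W, compute_uv=False)[:3])
```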
Do AI agents ask good questions? We built “Collaborative Battleship” to find out—and discovered that weaker LMs + Bayesian inference can beat GPT-5 at 1% of the cost. Paper, code & demos: https://t.co/lV76HRKR3d Here's what we learned about building rational information-seeking
4
33
159
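A hedged sketch of the general recipe behind "weaker LMs + Bayesian inference": keep a belief over board hypotheses and score candidate questions by expected information gain. The toy 1-D board, hypothesis set, and yes/no question format below are my own illustration, not the paper's code; in the actual system an LM would propose and answer questions while the Bayesian machinery scores them.

```python
import math
from itertools import combinations

# Toy 1-D "Battleship": one length-2 ship hidden somewhere on a 6-cell row.
CELLS = range(6)
HYPOTHESES = [frozenset(c) for c in combinations(CELLS, 2) if c[1] - c[0] == 1]
belief = {h: 1.0 / len(HYPOTHESES) for h in HYPOTHESES}   # uniform prior over placements

def entropy(p):
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def expected_info_gain(cell, belief):
    """Expected entropy reduction from asking: does the ship occupy this cell?"""
    gain = entropy(belief)
    for answer in (True, False):
        consistent = {h: p for h, p in belief.items() if (cell in h) == answer}
        mass = sum(consistent.values())
        if mass == 0:
            continue
        posterior = {h: p / mass for h, p in consistent.items()}
        gain -= mass * entropy(posterior)
    return gain

best = max(CELLS, key=lambda c: expected_info_gain(c, belief))
print("most informative cell to ask about:", best)
```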
Over the past year, my lab has been working on fleshing out theory/applications of the Platonic Representation Hypothesis. Today I want to share two new works on this topic: Eliciting higher alignment: https://t.co/KY4fjNeCBd Unpaired rep learning: https://t.co/vJTMoyJj5J 1/9
10
120
696
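For readers new to the Platonic Representation Hypothesis: one common way to operationalize "alignment" between two models' representations is mutual nearest-neighbor overlap on the same inputs. The sketch below is that generic metric, assuming cosine similarity and random placeholder embeddings; the linked papers may use different or additional measures.

```python
import numpy as np

def mutual_knn_alignment(X, Y, k=10):
    """Average fraction of shared k-nearest neighbors between two
    representations of the same n items: X (n, d1) from model A,
    Y (n, d2) from model B. A crude proxy for representational alignment."""
    def knn_sets(Z):
        Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)   # cosine similarity
        sims = Z @ Z.T
        np.fill_diagonal(sims, -np.inf)                     # exclude self-matches
        nbrs = np.argsort(-sims, axis=1)[:, :k]
        return [set(row) for row in nbrs]
    A, B = knn_sets(X), knn_sets(Y)
    return float(np.mean([len(a & b) / k for a, b in zip(A, B)]))

# e.g. X = image-model embeddings, Y = text-model embeddings of captions for the same items
X, Y = np.random.randn(200, 512), np.random.randn(200, 384)
print("alignment:", mutual_knn_alignment(X, Y))
```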
[1/7] Paired multimodal learning shows that training with text can help vision models learn better image representations. But can unpaired data do the same? Our new work shows that the answer is yes! w/ @shobsund @ChenyuW64562111, Stefanie Jegelka and @phillip_isola
11
53
437
you will learn so much and have fun working with her!
I am looking for motivated students to join my team at @AIatMeta FAIR for a summer internship. If you have experience with motion modeling / diffusion models and/or social AI please feel free to reach out! 🤖✨
0
0
6
Title: Advice for a young investigator in the first and last days of the Anthropocene Abstract: Within just a few years, it is likely that we will create AI systems that outperform the best humans on all intellectual tasks. This will have implications for your research and
58
259
2K
I wrote this blog post that tries to go further toward design principles for neural nets and optimizers. The post presents a visual intro to optimization on normed manifolds and a Muon variant for the manifold of matrices with unit condition number: https://t.co/EhhKN2Jylx
Efficient training of neural networks is difficult. Our second Connectionism post introduces Modular Manifolds, a theoretical step toward more stable and performant training by co-designing neural net optimizers with manifold constraints on weight matrices.
23
55
473
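For readers who haven't seen Muon: its core step replaces the raw gradient matrix with an approximately orthogonalized version via a few Newton-Schulz iterations. Below is a minimal sketch under that reading; the quintic coefficients are the ones commonly quoted for Muon, and the surrounding optimizer details (momentum, learning rate, and the manifold constraint discussed in the post) are omitted.

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=5):
    """Approximately map G to the nearest (semi-)orthogonal matrix,
    i.e. flatten its singular values toward 1, using a quintic
    Newton-Schulz iteration."""
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (np.linalg.norm(G) + 1e-7)    # Frobenius-normalize so singular values <= 1
    transposed = X.shape[0] > X.shape[1]
    if transposed:                        # iterate on the wide orientation
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if transposed else X

G = np.random.randn(128, 64)
O = newton_schulz_orthogonalize(G)
print(np.linalg.svd(O, compute_uv=False)[:5])   # singular values pushed toward 1
```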
to clarify, not saying we're there atm. I don't have a formal definition of what it means to be human, but agency and the ability to continually learn seem to be important
0
0
0
this tendency to anthropomorphize is too real and deters me from reading work with such titles. on the other hand, a part of me also asks: how do we know when a thing is conscious, and when do we start analyzing it as if it were another human-like organism? we automatically assume all humans are
1
0
3
uncertainty-aware reasoning, akin to how we humans leverage our own confidence
🚨New Paper!🚨 We trained reasoning LLMs to reason about what they don't know. o1-style reasoning training improves accuracy but produces overconfident models that hallucinate more. Meet RLCR: a simple RL method that trains LLMs to reason and reflect on their uncertainty --
0
1
3
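As I understand the idea (a sketch, not the paper's code): the reward combines answer correctness with a calibration term, e.g. a Brier-style penalty on the model's stated confidence, so the policy is pushed both to be right and to know when it might be wrong. The exact form and weighting below are my own assumption for illustration.

```python
def rlcr_style_reward(is_correct: bool, stated_confidence: float) -> float:
    """Correctness reward plus a Brier-style calibration penalty.

    is_correct: whether the final answer matched the reference.
    stated_confidence: the model's self-reported probability (0..1)
    that its answer is correct, parsed from its output.
    """
    correctness = 1.0 if is_correct else 0.0
    brier_penalty = (stated_confidence - correctness) ** 2   # 0 when perfectly calibrated
    return correctness - brier_penalty

# an overconfident wrong answer is punished far more than an honest "not sure"
print(rlcr_style_reward(False, 0.95))   # -0.9025
print(rlcr_style_reward(False, 0.30))   # -0.09
print(rlcr_style_reward(True, 0.90))    #  0.99
```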
was actually wondering with @hyundongleee about the fundamental differences between diffusion and autoregressive modeling, beyond the structure each imposes on the sequential conditional distribution, and how those differences manifest. a timely paper that addresses exactly this thought
🚨 The era of infinite internet data is ending. So we ask: 👉 What’s the right generative modelling objective when data—not compute—is the bottleneck? TL;DR: ▶️Compute-constrained? Train Autoregressive models ▶️Data-constrained? Train Diffusion models Get ready for 🤿 1/n
0
1
13
omw to trying this out 👀
Some news: We're building the next big thing — the first-ever AI-only social video app, built on a highly expressive human video model. Over the past few weeks, we’ve been testing it in private beta. Now, we’re opening early access: download the iOS app to join the waitlist, or
0
0
0
how particles can behave differently across scales and conditions, and how we can harness that as part of design, is cool
8. Jeonghyun Yoon: Precisely Loose: Unraveling the Potential of Particles
A big thank you goes out to the entire architecture community, including advisors, readers, staff, family and peers who helped bring these projects to light. Image credit: Chenyue “xdd” Dai 2/2
0
0
4
[1/9] We created a performant Lipschitz transformer by spectrally regulating the weights—without using activation stability tricks: no layer norm, QK norm, or logit softcapping. We think this may address a “root cause” of unstable training.
14
79
585
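A rough sketch of the kind of weight-spectrum control being described: estimate each weight matrix's top singular value with power iteration and rescale so it stays below a target, which caps that layer's Lipschitz constant. This is a generic spectral-cap routine for illustration, with a made-up target value, not the paper's exact regulation scheme.

```python
import numpy as np

def spectral_cap(W, max_sigma=1.0, iters=20):
    """Rescale W so its largest singular value is at most max_sigma.

    The top singular value is estimated with power iteration on W^T W,
    so repeated calls during training stay cheap."""
    v = np.random.randn(W.shape[1])
    for _ in range(iters):
        v = W.T @ (W @ v)
        v /= np.linalg.norm(v) + 1e-12
    sigma = np.linalg.norm(W @ v)          # estimated top singular value
    if sigma > max_sigma:
        W = W * (max_sigma / sigma)
    return W

W = np.random.randn(512, 512) * 0.1
W = spectral_cap(W, max_sigma=1.0)
print(np.linalg.norm(W, 2))                # <= 1.0, up to estimation error
```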
But actually this is the OG way of doing it; you should stop by E-2103 to see @jxbz and Laker Newhouse whiteboard the whole paper.
Laker and I are presenting this work in an hour at ICML poster E-2103. It’s on a theoretical framework and language (modula) for optimizers that are fast (like Shampoo) and scalable (like muP). You can think of modula as Muon extended to general layer types and network topologies
1
6
75