Jason Lee

@jason_lee328

Followers: 327 · Following: 180 · Media: 18 · Statuses: 82

Student Researcher @allen_ai, Robotics and PRIOR team · Robot Learning · Masters @RAIVNLab @uwcse w/ R. Krishna, D. Fox

Seattle, WA
Joined February 2013
@jason_lee328
Jason Lee
3 months
Introducing MolmoAct: our Action Reasoning Model that can Reason in Space (1/) 🧵⬇️
@allen_ai
Ai2
3 months
🤖✨ What if models that take action in the physical world could think through your instructions? Meet MolmoAct, our new fully open Action Reasoning Model (ARM) that does just that. 🧵
7
37
268
@sundayrobotics
Sunday
4 days
November 19
58
55
586
@DynaRobotics
Dyna Robotics
12 days
Excited to share our latest progress on DYNA-1 pre-training! 🤖 The base model can now perform diverse, dexterous tasks (laundry folding, package sorting, …) without any post-training, even in unseen environments. This powerful base also allows extremely efficient fine-tuning
5
39
253
@ZeYanjie
Yanjie Ze
13 days
Excited to introduce TWIST2, our next-generation humanoid data collection system. TWIST2 is portable (use anywhere, no MoCap), scalable (100+ demos in 15 mins), and holistic (unlock major whole-body human skills). Fully open-sourced: https://t.co/fAlyD77DEt
20
108
453
@GeneralistAI
Generalist
14 days
Introducing GEN-0, our latest 10B+ foundation model for robots ⏱️ built on Harmonic Reasoning, a new architecture that can think & act seamlessly 📈 strong scaling laws: more pretraining & model size = better 🌍 an unprecedented corpus of 270,000+ hrs of dexterous data. Read more 👇
49
284
1K
@jason_lee328
Jason Lee
21 days
Had an amazing time chatting with @chris_j_paxton @micoolcho on MolmoAct. Tune in for a deep dive on how we formulated Action Reasoning Data, the key that enables MolmoAct to Reason in Space
@RoboPapers
RoboPapers
21 days
Reasoning models have massively expanded what LLMs are capable of, but this hasn’t necessarily applied to robotics. Perhaps this is in part because robots need to reason over space, not just words and symbols; so the robotics version of a reasoning model would need to think in
2
2
14
@RoboPapers
RoboPapers
26 days
Full episode dropping soon! Geeking out with @jason_lee328 @hq_fang @DJiafei on MolmoAct: An Action Reasoning Model that reasons in 3D space https://t.co/S6STNfM0db Co-hosted by @micoolcho @chris_j_paxton
0
6
28
@Jesse_Y_Zhang
Jesse Zhang
1 month
How can we help *any* image-input policy generalize better? 👉 Meet PEEK 🤖 — a framework that uses VLMs to decide *where* to look and *what* to do, so downstream policies — from ACT, 3D-DA, or even π₀ — generalize more effectively! 🧵
1
31
121
@FuEnYang1
Fu-En (Fred) Yang
2 months
✨Thrilled to share that our paper “ThinkAct: Vision-Language-Action Reasoning via Reinforced Visual Latent Planning” has been accepted at #NeurIPS2025! Huge thanks to our amazing team for making this possible 🙌 @q0982569154, Yueh-Hua Wu, @CMHungSteven, Frank Wang, @FuEnYang1
@FuEnYang1
Fu-En (Fred) Yang
4 months
🤖 How can we teach embodied agents to think before they act? 🚀 Introducing ThinkAct — a hierarchical Reasoning VLA framework with an MLLM for complex, slow reasoning and an action expert for fast, grounded execution. Slow think, fast act. 🧠⚡🤲
0
2
12
@stevengongg
stevengongg
2 months
Yesterday marked @UWaterloo's first robot learning reading group for fall 2025, and it was a great success! This week focused on robot foundation models, covering Pi0 by @physical_int and LBM by @ToyotaResearch. Shoutout to @djkesu1 for helping cohost, and @palatialXR for
8
13
97
@jason_lee328
Jason Lee
2 months
Evaluation code for MolmoAct on LIBERO is now available: https://t.co/38BlwEdrIs Now with HF and vLLM implementations!
github.com
Official repository for MolmoAct (allenai/molmoact).
@jason_lee328
Jason Lee
3 months
Introducing MolmoAct: our Action Reasoning Model that can Reason in Space (1/) 🧵⬇️
0
0
8
@GeYan_21
Ge Yan
3 months
Introducing ManiFlow 🤖, a visual imitation learning policy for general robot manipulation that is efficient, robust, and generalizable: - 98.3% improvement on 8 real-world tasks, generalizing to novel objects & backgrounds - Applied to diverse embodiments: single-arm, bimanual &
8
60
219
@simonkalouche
Simon Kalouche
3 months
RL is getting very good at controls.
78
292
3K
@ZhiSu22
Zhi Su
3 months
🏓🤖 Our humanoid robot can now rally over 100 consecutive shots against a human in real table tennis — fully autonomous, sub-second reaction, human-like strikes.
118
563
3K
@xiao_ted
Ted Xiao
3 months
The new Gemini 2.5 Flash Image Preview (“nano banana”) model is out! The biggest jump ever in @lmarena_ai history may be justified: Unbelievably good character consistency, extremely strong instruction following for edits and adjustments, multi-image prompting 🚀
@GoogleDeepMind
Google DeepMind
3 months
Image generation with Gemini just got a bananas upgrade: it is now the state-of-the-art image generation and editing model. 🤯 From photorealistic masterpieces to mind-bending fantasy worlds, you can now natively produce, edit and refine visuals with new levels of reasoning,
0
2
24
@jason_lee328
Jason Lee
3 months
Evaluation code for MolmoAct on SimplerEnv is now available at https://t.co/5HVpKDt7KM Please follow the instructions under
github.com
allenai/SimplerEnv on GitHub.
@jason_lee328
Jason Lee
3 months
Introducing MolmoAct: our Action Reasoning Model that can Reason in Space (1/) 🧵⬇️
0
2
16
@jason_lee328
Jason Lee
3 months
Huge boost for the open model ecosystem. Looking forward to what's next!
@allen_ai
Ai2
3 months
With fresh support of $75M from @NSF and $77M from @NVIDIA, we’re set to scale our open model ecosystem, bolster the infrastructure behind it, and fast‑track reproducible AI research to unlock the next wave of scientific discovery. 💡
0
0
13
@VentureBeat
VentureBeat
3 months
AI2's MolmoAct model ‘thinks in 3D’ to challenge Nvidia and Google in robotics AI https://t.co/DZXw7YHDML
1
2
11
@IlirAliu_
Ilir Aliu - eu/acc
3 months
First fully open Action Reasoning Model (ARM); can ‘think’ in 3D & turn your instructions into real-world actions: [📍 Bookmark for later] A model that reasons in space, time, and motion. It breaks down your command into three steps: ✅ Grounds the scene with depth-aware
9
74
449
@jason_lee328
Jason Lee
3 months
At Ai2, we envisioned MolmoAct as a fully open model from the start: a foundation for impactful research. Training code, evaluation code, and data processing scripts will be released soon; we are finalizing them for public release to ensure reproducibility and ease of use.
@jason_lee328
Jason Lee
3 months
Introducing MolmoAct: our Action Reasoning Model that can Reason in Space (1/) 🧵⬇️
0
0
14
@RanjayKrishna
Ranjay Krishna
3 months
Most AI models still think in words. People, without even noticing, think with their bodies, planning how to move, grasp, and use the things around them. MolmoAct brings that to robotics: reasoning in space before acting. This is how we will get to the GPT moment for robotics.
@allen_ai
Ai2
3 months
🤖✨ What if models that take action in the physical world could think through your instructions? Meet MolmoAct, our new fully open Action Reasoning Model (ARM) that does just that. 🧵
1
13
72