Haoquan Fang

@hq_fang

Followers
163
Following
166
Media
9
Statuses
77

Undergrad @UWCSE @UWStat | Student Researcher @Allen_AI @Ai2PRIOR

Joined November 2022
@hq_fang
Haoquan Fang
3 months
We are launching MolmoAct🤖✨ A fully open Action Reasoning Model (ARM) that can reason in space: it perceives → it plans → it acts. 🧵👇
@allen_ai
Ai2
3 months
🤖✨ What if models that take action in the physical world could think through your instructions? Meet MolmoAct, our new fully open Action Reasoning Model (ARM) that does just that. 🧵
3
9
45
@DJiafei
Jiafei Duan
7 days
What happens when actions fail—and VLAs don’t know how to recover, especially when they’re frozen? We introduce corrective decoding: bootstrapping a pre-trained VLA with a companion VLM, FailSafe trained only from simulation, that detects impending failures and issues executable
3
9
73
@DJiafei
Jiafei Duan
9 hours
I’m on the academic job market this year and am actively seeking faculty positions in robot learning. My work focuses on developing efficient robotics foundation models with strong priors for reasoning and generalization. Please ping me if there are any opportunities!
1
4
71
@liujc1998
Jiacheng Liu
5 days
Our infini-gram mini paper received the Best Paper Award at #EMNLP2025!! Really proud 🥹
@xuhaoxh
Hao Xu @EMNLP
5 months
Wanna 🔎 inside Internet-scale LLM training data w/o spending 💰💰💰? Introducing infini-gram mini, an exact-match search engine with 14x less storage req than the OG infini-gram 😎 We make 45.6 TB of text searchable. Read on to find our Web Interface, API, and more. (1/n) ⬇️
20
16
323
@hq_fang
Haoquan Fang
12 days
Such an amazing event held at Ai2! Very glad to present MolmoAct, our fully open Action Reasoning Model, to #SeattleAIWeek guests.
@allen_ai
Ai2
13 days
"Innovation in the Open" is under way at Ai2 HQ! Sharing with our #SeattleAIWeek guests our latest research and hands-on experiences with our cutting-edge tools.
1
0
7
@uwcse
Allen School
13 days
This year's @techreview #TR35 Asia Pacific honors a trio of familiar faces: #UWAllen professors @simonshaoleidu & @ranjaykrishna, and PhD alum @sewon__min of @allen_ai & @Berkeley_EECS! Read about their work advancing #AI, #LLMs, computer vision and more: https://t.co/Z4X8FMDhwu
4
10
65
@hq_fang
Haoquan Fang
14 days
The full video is here! Many thanks to @micoolcho and @chris_j_paxton for inviting me and hosting this amazing talk on MolmoAct, an Action Reasoning Model that can reason in 3D space.
@RoboPapers
RoboPapers
15 days
Reasoning models have massively expanded what LLMs are capable of, but this hasn’t necessarily applied to robotics. Perhaps this is in part because robots need to reason over space, not just words and symbols; so the robotics version of a reasoning model would need to think in
0
3
21
@hq_fang
Haoquan Fang
20 days
Learn more about MolmoAct in this podcast! This is the second time our work has been featured on @RoboPapers; the first was SAM2Act, shared by @DJiafei.
@RoboPapers
RoboPapers
20 days
Full episode dropping soon! Geeking out with @jason_lee328 @hq_fang @DJiafei on MolmoAct: An Action Reasoning Model that reasons in 3D space https://t.co/S6STNfM0db Co-hosted by @micoolcho @chris_j_paxton
0
2
8
@shanli_xing
Shanli Xing
22 days
🤔 Can AI optimize the systems it runs on? 🚀 Introducing FlashInfer-Bench, a workflow that makes AI systems self-improving with agents: - Standardized signature for LLM serving kernels - Implement kernels with your preferred language - Benchmark them against real-world serving
3
45
143
@s_zhengbr
Brian Zheng
1 month
Can a LM that has only ever seen the word “cat” tokenized as ␣cat, understand the token sequence [␣, c, a, t]? In our NeurIPS spotlight ⭐, we show that the answer is surprisingly YES, and in fact, you can even modify the tokenization at inference-time for performance gains!🧵
5
16
82
@hq_fang
Haoquan Fang
2 months
Thrilled to share that our work SAM2Act won the Best Paper Award at the RemembeRL workshop at #CoRL2025, after being selected for oral presentation at three different workshops! It was really a pity that I couldn't be there in person. Huge thanks to @DJiafei for helping me
@DJiafei
Jiafei Duan
2 months
SAM2Act won best paper at the RemembeRL workshop at CoRL. Credit all goes to @hq_fang, the lead of the work, who unfortunately can't be at #CoRL2025. He is also looking for a PhD this cycle.
1
4
33
@hq_fang
Haoquan Fang
2 months
The code for training replication has been released: https://t.co/I1DRnyNnaP We are still cleaning up the code for downstream fine-tuning and deployment. Please stay tuned!
github.com
Official Repository for MolmoAct.
@hq_fang
Haoquan Fang
3 months
We are launching MolmoAct🤖✨ A fully open Action Reasoning Model (ARM) that can reason in space: it perceives → it plans → it acts. 🧵👇
1
2
16
@xuhaoxh
Hao Xu @EMNLP
5 months
Wanna 🔎 inside Internet-scale LLM training data w/o spending 💰💰💰? Introducing infini-gram mini, an exact-match search engine with 14x less storage req than the OG infini-gram 😎 We make 45.6 TB of text searchable. Read on to find our Web Interface, API, and more. (1/n) ⬇️
6
32
112
@hq_fang
Haoquan Fang
3 months
Nice talk with @chrismatthieu on our recent project #MolmoAct 🙌 #RealSense is really one of the key components that enables robots to “see” in our development stack.
@chrismatthieu
Chris Matthieu
3 months
We all know that #RealSense brings 3D stereo vision to robots. Well, there's a new open source robotics platform from @allen_ai called #MolmoAct that leverages RealSense to make robots "think" in 3D! Here are my interviews with the team behind MolmoAct (#PhysicalAI) @DJiafei
0
0
3
@chris_j_paxton
Chris Paxton
3 months
This to me really feels like how robot foundation models "should" work. I like that it can autoregressively predict depth tokens, lift to 2.5D, and use this for reasoning — it feels like a true robotics analogue of modern reasoning LLMs. Really exciting work.
@DJiafei
Jiafei Duan
3 months
Reasoning is central to purposeful action. Today we introduce MolmoAct — a fully open Action Reasoning Model (ARM) for robotics. Grounded in large-scale pre-training with action reasoning data, every predicted action is interpretable and user-steerable via visual trace. We are
4
18
187
@hq_fang
Haoquan Fang
3 months
We will release all artifacts used in creating MolmoAct (data, training code, evaluations, intermediate checkpoints), furthering our commitment to open-source AI development and reproducibility. Please stay tuned for the following repo, and feel free to star or repost ✨
github.com
Official Repository for MolmoAct.
@hq_fang
Haoquan Fang
3 months
We are launching MolmoAct🤖✨ A fully open Action Reasoning Model (ARM) that can reason in space: it perceives → it plans → it acts. 🧵👇
0
0
9
@RanjayKrishna
Ranjay Krishna
3 months
Most AI models still think in words. People, without even noticing, think with their bodies, planning how to move, grasp, and use things around them. MolmoAct brings that to robotics: reasoning in space before acting. This is how we will get to the GPT-moment for robotics.
@allen_ai
Ai2
3 months
🤖✨ What if models that take action in the physical world could think through your instructions? Meet MolmoAct, our new fully open Action Reasoning Model (ARM) that does just that. 🧵
1
13
72
@RenZhongzheng
Jason Ren
3 months
Ai2's commitment to **full openness** is serious, and here comes another one! Check out MolmoAct, a great action reasoning model that could benefit the entire robotics field! AMAZING project led by my colleagues @DJiafei @jason_lee328 @hq_fang @RanjayKrishna
@allen_ai
Ai2
3 months
🤖✨ What if models that take action in the physical world could think through your instructions? Meet MolmoAct, our new fully open Action Reasoning Model (ARM) that does just that. 🧵
0
7
18
@DJiafei
Jiafei Duan
3 months
Reasoning is central to purposeful action. Today we introduce MolmoAct — a fully open Action Reasoning Model (ARM) for robotics. Grounded in large-scale pre-training with action reasoning data, every predicted action is interpretable and user-steerable via visual trace. We are
@allen_ai
Ai2
3 months
🤖✨ What if models that take action in the physical world could think through your instructions? Meet MolmoAct, our new fully open Action Reasoning Model (ARM) that does just that. 🧵
15
75
467