Runpei Dong
@RunpeiDong
Followers: 408
Following: 1K
Media: 3
Statuses: 100
CS PhD student @UofIllinois | Previously @Tsinghua_IIIS and XJTU | Interested in robot learning & machine learning
Champaign, IL
Joined April 2020
When @Xialin_He and I started working on our new G1 robot, we often found that every time the robot fell, picking it up manually was exhausting. Although the robot might get a few scratches, we were the ones getting a serious workout💪 from lifting it repeatedly (save a need to
1
2
19
Thrilled to share our work AlphaOne🔥 at @emnlpmeeting 2025! @jyzhang1208 and I will be presenting this work online; please feel free to join and talk to us!!! 📆Date: 8:00-9:00, Nov 7, Friday (Beijing Standard Time, UTC+8) 📺Session: Gather Session 4
💥Excited to share our paper “AlphaOne: Reasoning Models Thinking Slow and Fast at Test Time” at #EMNLP2025 🚀 this Friday, Nov. 7, during Gather Session 4. Come say hi virtually!👋 📄Paper: https://t.co/CksN8hEuoF 🪩Website & Code: https://t.co/AwMLAQFvtz
#AI #LLMs #Reasoning
0
1
6
ResMimic: learns a whole-body loco-manipulation policy on top of a general motion tracking policy. Key ideas: (i) pre-train a general motion tracking policy; (ii) post-train a task-specific residual policy with (a) an object tracking reward, (b) a contact reward, and (c) virtual object force
ResMimic: a two-stage residual framework that unleashes the power of a pre-trained general motion tracking policy. It enables expressive whole-body loco-manipulation with payloads up to 5.5 kg without task-specific design, generalizes across poses, and exhibits reactive behavior.
5
26
203
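The two-stage recipe above maps naturally onto a residual-action controller. The sketch below is a minimal, hypothetical illustration of that general idea (not the ResMimic code): a frozen pre-trained motion-tracking policy produces a base action, and a small task-specific residual head, trained with the task rewards, adds a correction on top. The network sizes, observation layout, and residual scale are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ResidualLocoManipPolicy(nn.Module):
    """Hypothetical sketch of a two-stage residual controller:
    a frozen, pre-trained motion-tracking base policy plus a small
    task-specific residual head whose output is added to the base action."""

    def __init__(self, base_policy: nn.Module, obs_dim: int, act_dim: int,
                 residual_scale: float = 0.1):
        super().__init__()
        self.base_policy = base_policy
        for p in self.base_policy.parameters():  # stage 1: keep the base policy frozen
            p.requires_grad_(False)
        # Stage 2: small residual MLP trained with task rewards
        # (e.g., object tracking, contact, and virtual object force terms).
        self.residual = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ELU(),
            nn.Linear(256, 256), nn.ELU(),
            nn.Linear(256, act_dim),
        )
        self.residual_scale = residual_scale

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            base_action = self.base_policy(obs)
        # Small correction on top of the base action; only this head is trained.
        return base_action + self.residual_scale * self.residual(obs)
```

Keeping the correction small (via the residual scale) is one common way to preserve the pre-trained tracking behavior while the residual head specializes to the loco-manipulation task.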
Humanoid motion tracking performance is largely determined by retargeting quality! Introducing 𝗢𝗺𝗻𝗶𝗥𝗲𝘁𝗮𝗿𝗴𝗲𝘁🎯, generating high-quality, interaction-preserving data from human motions for learning complex humanoid skills with 𝗺𝗶𝗻𝗶𝗺𝗮𝗹 RL: - 5 rewards, - 4 DR
31
154
662
Visual manipulation is really challenging for humanoids, and it is impressive to see such interesting results with a depth policy!
Our humanoid now learns loco-manip skills that generalize across space (Stanford) & time (day & night), using egocentric vision, trained only in simulation https://t.co/COANYh4tGV
0
0
1
Check out Any2Track! Really impressive work on whole-body tracking!
🚀 Introducing Any2Track — a foundational tracker for humanoid motion tracking. Achieving reliable tracking of whole-body humanoid motions under real-world disturbances remains an open challenge, given the complexity of dynamics, frequent contacts, and unpredictable
0
0
1
How do we unlock the full dexterity of robot hands with data, even beyond what teleoperation can achieve? DEXOP captures natural human manipulation with full-hand tactile & proprio sensing, plus direct force feedback to users, without needing a robot👉 https://t.co/rjfQ9nzofm
31
277
1K
Getting up is nothing fancy nowadays, but it still feels great to have it.
9
20
204
Everyone knows action chunking is great for imitation learning. It turns out that we can extend its success to RL to better leverage prior data for improved exploration and online sample efficiency! https://t.co/J5LdRRYbSH The recipe to achieve this is incredibly simple. 🧵 1/N
3
69
367
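To make the chunking idea above concrete, here is a rough, hypothetical sketch (not the linked paper's implementation) of a policy that commits to a short sequence of actions per decision and executes it open-loop, so that exploration and value backups happen at the chunk level. The chunk length, network shapes, discount factor, and gym-style environment interface are all assumptions.

```python
import torch
import torch.nn as nn

CHUNK = 5  # assumed chunk length: the policy commits to 5 low-level actions at once

class ChunkedPolicy(nn.Module):
    """Outputs a short sequence (chunk) of actions from a single observation,
    so that exploration noise and TD backups operate at the chunk level."""

    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, CHUNK * act_dim),
        )
        self.act_dim = act_dim

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # (batch, CHUNK, act_dim): one forward pass plans the whole chunk.
        return self.net(obs).view(-1, CHUNK, self.act_dim)

def rollout_chunk(env, policy: ChunkedPolicy, obs):
    """Execute one chunk open-loop and accumulate the discounted reward,
    which a chunk-level critic would then bootstrap from."""
    chunk = policy(torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0))[0]
    total, discount = 0.0, 1.0
    for a in chunk:
        obs, r, done, *_ = env.step(a.detach().numpy())  # assumed gym-style env
        total += discount * r
        discount *= 0.99  # assumed discount factor
        if done:
            break
    return obs, total
```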
DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge
1
27
139
🔥 Thrilled to release our new multimodal RL work: Open Vision Reasoner! A powerful 7B model with SOTA performance on language & vision reasoning benchmarks, trained with nearly 1K steps of multimodal RL. Our journey begins with a central question: Can the cognitive behaviors
1
8
48
🤖 What if a humanoid robot could make a hamburger from raw ingredients—all the way to your plate? 🔥 Excited to announce ViTacFormer: our new pipeline for next-level dexterous manipulation with active vision + high-resolution touch. 🎯 For the first time ever, we demonstrate
10
120
477
A new AGIBOT X2-N (Nezha) video shows the ability to carry goods blindly on stairs and slopes📦 The robot autonomously switches between bipedal and wheeled modes while maintaining balance and stability throughout the process — a feature that will be highly valuable in dim or
Well, a humanoid robot can move on two legs or wheels, depending on the environment it faces. AGIBOT (Zhiyuan) from Shanghai revealed that their new-generation humanoid robot "Nezha" can autonomously switch from walking on two legs to rolling on two wheels for fast movement. Imagine
2
36
151
#RSS2025 Excited to be presenting our HumanUP tomorrow at the Humanoids Session (Sunday, June 22, 2025) 📺 Spotlight talk: 4:30pm–5:30pm, Bovard Auditorium 📜Poster: 6:30pm-8:00pm, #3, Associates Park
When @Xialin_He and I started working on our new G1 robot, we often found that every time the robot fell, picking it up manually was exhausting. Although the robot might get a few scratches, we were the ones getting a serious workout💪 from lifting it repeatedly (save a need to
0
1
9
Motion tracking is a hard problem, especially when you want to track a lot of motions with only a single policy. Good to know that the MoE-distilled student works so well; congrats @C___eric417 on such exciting results!
🚀Introducing GMT — a general motion tracking framework that enables high-fidelity motion tracking on humanoid robots by training a single policy from large, unstructured human motion datasets. 🤖A step toward general humanoid controllers. Project Website:
1
1
3
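For context on the "MoE-distilled student" mentioned above: distillation here means regressing a single student policy onto the actions of a frozen mixture-of-experts teacher. A minimal, hypothetical distillation step (not GMT's actual code; the teacher/student call signatures are assumptions) could look like this:

```python
import torch
import torch.nn.functional as F

def distill_step(teacher, student, optimizer, obs_batch):
    """One behavior-cloning distillation step: the single student policy
    regresses the actions produced by the (frozen) mixture-of-experts teacher."""
    with torch.no_grad():
        target_actions = teacher(obs_batch)   # MoE teacher blends experts internally
    pred_actions = student(obs_batch)         # single unified student policy
    loss = F.mse_loss(pred_actions, target_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```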
This is 🤯 Figure 02 autonomously sorting and scanning packages, including deformable ones. The speed and dexterity are amazing.
89
368
2K
Can you imagine playing various games through an AI model? Like Black Myth: Wukong.🤩 Sharing our latest work: DeepVerse, a world model built on an autoregressive paradigm🌏. DeepVerse can imagine the entire world behind an image and enables free exploration through interaction🎮.
12
34
246
Very impressive results! I want my G1 to serve me a beer as well🍻
🤖Can a humanoid robot carry a full cup of beer without spilling while walking 🍺? Hold My Beer! Introducing Hold My Beer🍺: Learning Gentle Humanoid Locomotion and End-Effector Stabilization Control. Project: https://t.co/jUMwEVEyAX See more details below👇
1
0
2
Reasoning Models Thinking Slow and Fast at Test Time
Another super cool work on improving reasoning efficiency in LLMs. They show that slow-then-fast reasoning outperforms other strategies. Here are my notes:
11
60
283
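As a rough illustration of the slow-then-fast idea, the sketch below schedules test-time decoding so that the model is nudged to keep deliberating during an early fraction of the token budget and is then pushed to wrap up and answer. This is a generic, hypothetical sketch rather than AlphaOne's exact mechanism; the cue strings, the alpha threshold, and the sample_next/is_eos API are assumptions.

```python
def slow_then_fast_decode(model, prompt_tokens, budget=2048, alpha=0.6,
                          slow_token="Wait", end_think_token="</think>"):
    """Hypothetical slow-then-fast test-time schedule:
    - before alpha * budget tokens, replace premature end-of-thinking with a
      'keep thinking' cue to encourage continued deliberation (slow phase);
    - after the threshold, force the end-of-thinking token so the model
      switches to fast, concise answering (fast phase)."""
    tokens = list(prompt_tokens)
    switch_point = int(alpha * budget)
    for step in range(budget):
        next_tok = model.sample_next(tokens)        # assumed single-token sampling API
        if step < switch_point and next_tok == end_think_token:
            next_tok = slow_token                   # too early to stop: keep deliberating
        elif step >= switch_point and end_think_token not in tokens:
            next_tok = end_think_token              # budget reached: force the transition
        tokens.append(next_tok)
        if model.is_eos(next_tok):                  # assumed end-of-sequence check
            break
    return tokens
```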
Thanks @AK for sharing! I will post an introduction to our new work, AlphaOne, soon! Stay tuned!
1
2
15