Runpei Dong

@RunpeiDong

Followers: 408 · Following: 1K · Media: 3 · Statuses: 100

CS PhD student @UofIllinois | Previously @Tsinghua_IIIS and XJTU | Interested in robot learning & machine learning

Champaign, IL
Joined April 2020
@RunpeiDong
Runpei Dong
9 months
When @Xialin_He and I started working on our new G1 robot, we quickly found that picking it up by hand after every fall was exhausting. The robot might get a few scratches, but we were the ones getting a serious workout💪 from lifting it repeatedly (save a need to…
1 reply · 2 reposts · 19 likes
@RunpeiDong
Runpei Dong
5 days
Thrilled to share our work AlphaOne🔥 at @emnlpmeeting 2025! @jyzhang1208 and I will be presenting it online; please feel free to join and talk to us!!! 📆 Date: 8:00–9:00, Friday, Nov 7 (Beijing Standard Time, UTC+8) 📺 Session: Gather Session 4
@jyzhang1208
Junyu Zhang
5 days
💥Excited to share our paper “AlphaOne: Reasoning Models Thinking Slow and Fast at Test Time” at #EMNLP2025 🚀 this Friday, Nov. 7, during Gather Session 4. Come say hi virtually!👋 📄Paper: https://t.co/CksN8hEuoF 🪩Website & Code: https://t.co/AwMLAQFvtz #AI #LLMs #Reasoning
0 replies · 1 repost · 6 likes
@pabbeel
Pieter Abbeel
1 month
ResMimic learns a whole-body loco-manipulation policy on top of a general motion-tracking policy. Key ideas: (i) pre-train a general motion-tracking policy; (ii) post-train a task-specific residual policy with (a) an object-tracking reward, (b) a contact reward, and (c) virtual object forces.
@SihengZhao
Siheng Zhao
1 month
ResMimic: a two-stage residual framework that unleashes the power of a pre-trained general motion-tracking policy. It enables expressive whole-body loco-manipulation with payloads up to 5.5 kg without task-specific design, generalizes across poses, and exhibits reactive behavior.
5 replies · 26 reposts · 203 likes
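Editor's aside: the posts above only outline ResMimic's two-stage recipe, so here is a minimal, purely illustrative Python sketch of the general residual-policy pattern they describe: a frozen pre-trained tracking policy plus a small task-specific residual, scored by a composite reward. This is not ResMimic's actual implementation; every function, shape, and weight below is a hypothetical placeholder.

```python
import numpy as np

# Hypothetical stand-ins; the real policies are learned networks.
def base_tracking_policy(obs):
    """Frozen, pre-trained general motion-tracking policy (stage 1)."""
    return np.tanh(obs[:12])            # placeholder whole-body action

def residual_policy(obs, theta):
    """Small task-specific residual head trained in stage 2."""
    return 0.1 * np.tanh(theta @ obs)   # small correction added to the base action

def composite_reward(object_tracking_err, undesired_contact, task_err):
    """Illustrative mix of the reward terms named above (object tracking +
    contact shaping); virtual object forces would act as a training-time aid
    inside the simulator and are not modeled here."""
    return -1.0 * object_tracking_err - 0.5 * undesired_contact - 1.0 * task_err

obs = np.random.randn(24)
theta = np.random.randn(12, 24) * 0.01
action = base_tracking_policy(obs) + residual_policy(obs, theta)  # residual composition
print(action.shape, composite_reward(0.2, 0.0, 0.1))
```

Keeping the residual small and additive lets stage 2 specialize to a task without destroying the general tracking behavior learned in stage 1, which is the design choice the posts emphasize.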
@zhenkirito123
Zhen Wu
1 month
Humanoid motion-tracking performance is largely determined by retargeting quality! Introducing OmniRetarget🎯, which generates high-quality, interaction-preserving data from human motions for learning complex humanoid skills with minimal RL: 5 rewards, 4 DR…
31 replies · 154 reposts · 662 likes
@RunpeiDong
Runpei Dong
1 month
Visual manipulation is really challenging for humanoids, and it is impressive to see such interesting results with a depth policy!
@ZeYanjie
Yanjie Ze
2 months
Our humanoid now learns loco-manip skills that generalize across space (Stanford) & time (day & night), using egocentric vision, trained only in simulation https://t.co/COANYh4tGV
0 replies · 0 reposts · 1 like
@RunpeiDong
Runpei Dong
2 months
Check out Any2Track! Really impressive work on whole-body tracking!
@ericyi0124
Li Yi
2 months
🚀 Introducing Any2Track — a foundational tracker for humanoid motion tracking. Achieving reliable tracking of whole-body humanoid motions under real-world disturbances remains an open challenge, given the complexity of dynamics, frequent contacts, and unpredictable
0 replies · 0 reposts · 1 like
@haoshu_fang
Hao-Shu Fang
2 months
How do we unlock the full dexterity of robot hands with data, even beyond what teleoperation can achieve? DEXOP captures natural human manipulation with full-hand tactile & proprio sensing, plus direct force feedback to users, without needing a robot👉 https://t.co/rjfQ9nzofm
31 replies · 277 reposts · 1K likes
@catachiii
Jin Cheng
3 months
Getting up is nothing fancy nowadays, but it still feels great to have it.
9 replies · 20 reposts · 204 likes
@qiyang_li
Qiyang Li
4 months
Everyone knows action chunking is great for imitation learning. It turns out that we can extend its success to RL to better leverage prior data for improved exploration and online sample efficiency! https://t.co/J5LdRRYbSH The recipe to achieve this is incredibly simple. 🧵 1/N
3 replies · 69 reposts · 367 likes
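Editor's aside (not the paper's specific recipe): a small, self-contained sketch of what action chunking means operationally, i.e., the policy predicts a chunk of H future actions and the agent executes it open-loop before querying the policy again. The chunk length, the toy policy, and the toy environment below are all hypothetical placeholders.

```python
import numpy as np

H = 4  # chunk length (hypothetical)

def chunk_policy(obs):
    """Toy policy that predicts a chunk of H actions from one observation."""
    rng = np.random.default_rng(abs(hash(obs.tobytes())) % (2**32))
    return rng.normal(size=(H, 2))    # H actions in a 2-D action space

def rollout(env_step, obs, steps=12):
    """Execute chunks open-loop: query the policy only once every H steps."""
    total_reward, t = 0.0, 0
    while t < steps:
        chunk = chunk_policy(obs)
        for action in chunk:          # open-loop execution within the chunk
            obs, reward = env_step(obs, action)
            total_reward += reward
            t += 1
            if t >= steps:
                break
    return total_reward

def toy_env_step(obs, action):
    """Trivial stand-in environment: reward for keeping the state small."""
    next_obs = 0.9 * obs + 0.1 * np.pad(action, (0, obs.size - action.size))
    return next_obs, -float(np.linalg.norm(next_obs))

print(rollout(toy_env_step, np.ones(4)))
```

Committing to multi-step chunks is what makes the exploration more temporally coherent; how the paper folds this into value learning is not reproduced here.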
@CSProfKGD
Kosta Derpanis (sabbatical @ CMU)
4 months
Really cool invited talk by @SongShuran: “Making Video Models Useful for Robots”
@CSProfKGD
Kosta Derpanis (sabbatical @ CMU)
4 months
What an incredible setting for a workshop 😍
2 replies · 3 reposts · 35 likes
@_akhaliq
AK
4 months
DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge
1 reply · 27 reposts · 139 likes
@yanawei_
Yana Wei
4 months
🔥 Thrilled to release our new multimodal RL work: Open Vision Reasoner! A powerful 7B model with SOTA performance on language & vision reasoning benchmarks, trained with nearly 1K steps of multimodal RL. Our journey begins with a central question: Can the cognitive behaviors…
1 reply · 8 reposts · 48 likes
@HaoranGeng2
Haoran Geng
4 months
🤖 What if a humanoid robot could make a hamburger from raw ingredients—all the way to your plate? 🔥 Excited to announce ViTacFormer: our new pipeline for next-level dexterous manipulation with active vision + high-resolution touch. 🎯 For the first time ever, we demonstrate
10 replies · 120 reposts · 477 likes
@CyberRobooo
CyberRobo
4 months
New AGIBOT X2-N (Nezha) video shows the ability to carry goods blindly on stairs and slopes📦 The robot autonomously switches between bipedal and wheeled modes while maintaining balance and stability throughout the process, a feature that will be highly valuable in dim or…
@CyberRobooo
CyberRobo
6 months
Well, a humanoid robot can move on two legs or on wheels, depending on the environment it faces. AGIBOT (Zhiyuan) from Shanghai revealed that their new-generation humanoid robot "Nezha" can autonomously switch from walking on two legs to rolling on two wheels for fast movement. Imagine…
2 replies · 36 reposts · 151 likes
@RunpeiDong
Runpei Dong
5 months
#RSS2025 Excited to be presenting our work HumanUP tomorrow at the Humanoids Session (Sunday, June 22, 2025) 📺 Spotlight talk: 4:30pm–5:30pm, Bovard Auditorium 📜 Poster: 6:30pm–8:00pm, #3, Associates Park
@RunpeiDong
Runpei Dong
9 months
When @Xialin_He and I started working on our new G1 robot, we quickly found that picking it up by hand after every fall was exhausting. The robot might get a few scratches, but we were the ones getting a serious workout💪 from lifting it repeatedly (save a need to…
0 replies · 1 repost · 9 likes
@RunpeiDong
Runpei Dong
5 months
Motion tracking is a hard problem, especially when you want to track a lot of motions with only a single policy. Good to see that the MoE-distilled student works so well; congrats @C___eric417 on such exciting results!
@C___eric417
Zixuan Chen
5 months
🚀Introducing GMT — a general motion tracking framework that enables high-fidelity motion tracking on humanoid robots by training a single policy from large, unstructured human motion datasets. 🤖A step toward general humanoid controllers. Project Website:
1 reply · 1 repost · 3 likes
@TheHumanoidHub
The Humanoid Hub
5 months
This is 🤯 Figure 02 autonomously sorting and scanning packages, including deformable ones. The speed and dexterity are amazing.
89 replies · 368 reposts · 2K likes
@SOTAMak1r
Junyi Chen
5 months
Can you imagine playing various games through an AI model? Like Black Myth: Wukong.🤩 Sharing our latest work: DeepVerse, a world model built on an autoregressive paradigm🌏. DeepVerse can imagine the entire world behind images and enables free exploration through interaction🎮.
12 replies · 34 reposts · 246 likes
@RunpeiDong
Runpei Dong
5 months
Very impressive results! I want my G1 to serve me a beer as well🍻
@li_yitang
Yitang Li
5 months
🤖Can a humanoid robot carry a full cup of beer without spilling while walking 🍺? Hold My Beer! Introducing Hold My Beer🍺: Learning Gentle Humanoid Locomotion and End-Effector Stabilization Control. Project: https://t.co/jUMwEVEyAX See more details below👇
1 reply · 0 reposts · 2 likes
@omarsar0
elvis
5 months
Reasoning Models Thinking Slow and Fast at Test Time: another super cool work on improving reasoning efficiency in LLMs. They show that slow-then-fast reasoning outperforms other strategies. Here are my notes:
11 replies · 60 reposts · 283 likes
@RunpeiDong
Runpei Dong
5 months
Thanks @AK for sharing! I will post an introduction to our new work, AlphaOne, soon. Stay tuned!
@_akhaliq
AK
5 months
AlphaOne: Reasoning Models Thinking Slow and Fast at Test Time
1 reply · 2 reposts · 15 likes