Zhen Wu

@zhenkirito123

Followers: 2K · Following: 324 · Media: 8 · Statuses: 25

Research Intern @ Amazon FAR (Frontier AI & Robotics). CS @Stanford. Humanoid Robots & Character Animation 🤖

California, USA
Joined April 2022
@zhenkirito123
Zhen Wu
10 days
I've long wondered if we can make a humanoid robot do a wallflip - and we just made it happen by leveraging OmniRetarget with BeyondMimic tracking! This came after our original OmniRetarget experiments, with only minor tweaks to RL training: relaxing a…
@zhenkirito123
Zhen Wu
19 days
Humanoid motion tracking performance is greatly determined by retargeting quality! Introducing OmniRetarget 🎯, generating high-quality interaction-preserving data from human motions for learning complex humanoid skills with minimal RL: 5 rewards, 4 DR…
178
561
4K
@SihengZhao
Siheng Zhao
12 days
ResMimic: a two-stage residual framework that unleashes the power of a pre-trained general motion tracking policy. It enables expressive whole-body loco-manipulation with payloads up to 5.5 kg without task-specific design, generalizes across poses, and exhibits reactive behavior.
10
71
310
@zhenkirito123
Zhen Wu
19 days
We are open-sourcing over 4 hours of high-quality, retargeted trajectories! Website: https://t.co/EeW71PaXVI ArXiv: https://t.co/8jxK1svcmT Datasets: https://t.co/xUq8seOxtM Huge shout out to the amazing team: @lujieyang98, @x_h_ucb, @akanazawa, @pabbeel, @carlo_sferrazza,…
3
6
63
@zhenkirito123
Zhen Wu
19 days
Standing on the shoulders of giants! Our work builds on amazing research in the community 💡. We use the "interaction mesh" 🕸️ [1], [2] to preserve spatial relationships and leverage the minimal RL formulation from works like BeyondMimic [3]. Our long-horizon sequence is a nod to…
1
1
38
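For readers unfamiliar with the "interaction mesh" idea cited above: it preserves spatial relationships by penalizing changes in the Laplacian coordinates of a mesh built over body (and object) points. Below is a minimal sketch of that deformation energy on a toy point set; the vertices, neighbor lists, and function names are illustrative, not the paper's actual formulation.

```python
import numpy as np

def laplacian_coords(verts, neighbors):
    """Laplacian coordinate of each vertex: its offset from the centroid
    of its neighbors. Translation-invariant by construction."""
    deltas = np.empty_like(verts)
    for i, nbrs in enumerate(neighbors):
        deltas[i] = verts[i] - verts[nbrs].mean(axis=0)
    return deltas

def deformation_energy(src_verts, tgt_verts, neighbors):
    """Sum of squared differences between the Laplacian coordinates of a
    source (human) and target (robot) point set: zero when local spatial
    relationships are preserved up to a global translation."""
    diff = laplacian_coords(src_verts, neighbors) - laplacian_coords(tgt_verts, neighbors)
    return float((diff ** 2).sum())

# Toy example: 4 points on a unit square, each connected to its two edge-neighbors.
square = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(deformation_energy(square, square + np.array([5., -2., 1.]), neighbors))  # translation -> 0.0
```

A pure translation leaves the energy at zero, while stretching or interpenetrating the point set drives it up, which is why minimizing it keeps relative spatial arrangements intact during retargeting.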
@zhenkirito123
Zhen Wu
19 days
Our grand finale: a complex, long-horizon dynamic sequence, all driven by a proprioceptive-only policy (no vision/LiDAR)! In this task, the robot carries a chair to a platform, uses it as a step to climb up, then leaps off and performs a parkour-style roll to absorb the landing.
5
26
154
@zhenkirito123
Zhen Wu
19 days
But how much better is our data? 🤔 Compared to widely used baselines, our motions show far fewer physical artifacts (virtually zero foot-skating and penetration) while better preserving contact. This allows us to use an open-source RL framework (BeyondMimic) without…
2
0
32
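Foot-skating, one of the artifacts this tweet measures, is typically quantified as the horizontal foot speed during frames labeled as in contact. Here is a back-of-the-envelope version of such a metric; the height-threshold contact test, frame rate, and function name are illustrative assumptions, not the paper's evaluation protocol.

```python
import numpy as np

def foot_skate(foot_pos, contact_height=0.03, fps=30.0):
    """Mean horizontal foot speed (m/s) over frames where the foot is
    near the ground; 0.0 when the foot never makes contact.

    foot_pos: (T, 3) array of one foot's world positions (x, y, z)."""
    horiz_speed = np.linalg.norm(np.diff(foot_pos[:, :2], axis=0), axis=1) * fps
    in_contact = foot_pos[1:, 2] < contact_height  # crude height-based contact test
    if not in_contact.any():
        return 0.0
    return float(horiz_speed[in_contact].mean())

planted = np.zeros((10, 3))              # foot fixed on the ground
sliding = planted.copy()
sliding[:, 0] = np.arange(10) * 0.01     # slides 1 cm per frame at z = 0
print(foot_skate(planted), foot_skate(sliding))  # ~0.0 and ~0.3 m/s
```

A planted foot scores zero; a foot that translates while flagged as in contact accumulates skating, which is exactly the artifact that forces ad-hoc anti-slip rewards when the data is bad.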
@zhenkirito123
Zhen Wu
19 days
And it's not just for a specific robot! Our framework is highly general and adapts to different robot embodiments, including the @UnitreeRobotics H1 and the @boosterobotics T1. We can retarget complex object-carrying and platform-climbing skills across these different robots with…
1
0
33
@zhenkirito123
Zhen Wu
19 days
What about scalability? OmniRetarget transforms a SINGLE human demo into diverse motion clips. We can systematically vary terrain height, object size, and initial poses. Best of all, these augmented skills transfer directly from sim to our real-world hardware! 🤖➡️🦾 4/9
1
2
36
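The augmentation described above (one demo, systematic variation of terrain height, object size, and initial pose) is, at its core, a parameter sweep. A toy sketch of that idea, with hypothetical parameter names and ranges that are not taken from the paper:

```python
import itertools

def augmentation_grid(terrain_heights, object_scales, init_yaws):
    """Cross product of the variation axes: each combination seeds one
    retargeting run from the same single human demonstration."""
    return [
        {"terrain_height": h, "object_scale": s, "init_yaw": y}
        for h, s, y in itertools.product(terrain_heights, object_scales, init_yaws)
    ]

variants = augmentation_grid(
    terrain_heights=[0.2, 0.3, 0.4],   # meters
    object_scales=[0.9, 1.0, 1.1],     # relative to the demo object
    init_yaws=[-0.3, 0.0, 0.3],        # radians
)
print(len(variants))  # 27 motion clips from one demo
```

Even this tiny 3x3x3 grid turns one demonstration into 27 distinct retargeting problems, which is where the "single demo into diverse motion clips" scaling comes from.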
@zhenkirito123
Zhen Wu
19 days
The result of this high-quality data? We can train diverse skills like box carrying 📦, slope crawling 🐾, and platform climbing 🧗 with a radically simplified RL process! All policies use just 5 reward terms, achieving successful zero-shot sim-to-real transfer! 🎯➡️🦾 3/9
1
0
55
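The "just 5 reward terms" refers to a DeepMimic-style tracking objective. The excerpt doesn't list the actual terms, so the ones below are hypothetical stand-ins; the sketch only shows the common weighted exp(-error) shape such rewards take.

```python
import numpy as np

def tracking_reward(state, ref, weights):
    """Weighted sum of exponentiated tracking errors, a common shape for
    motion-imitation rewards. Term names and scales here are illustrative."""
    def term(key, scale):
        err = np.sum((state[key] - ref[key]) ** 2)
        return np.exp(-scale * err)

    terms = {
        "joint_pos": term("joint_pos", 2.0),
        "joint_vel": term("joint_vel", 0.1),
        "root_pos":  term("root_pos", 10.0),
        "root_ori":  term("root_ori", 5.0),
        "ee_pos":    term("ee_pos", 40.0),   # end-effector positions
    }
    return sum(weights[k] * v for k, v in terms.items())

# With a perfect match every exp(-err) term is 1, so the reward equals
# the sum of the weights.
keys = ["joint_pos", "joint_vel", "root_pos", "root_ori", "ee_pos"]
state = {k: np.zeros(3) for k in keys}
weights = {k: 1.0 for k in keys}
print(tracking_reward(state, state, weights))  # 5.0
```

The point of keeping the term count this small is that every extra ad-hoc reward is another hand-tuned trade-off; high-quality retargeted data makes most of them unnecessary.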
@zhenkirito123
Zhen Wu
19 days
Existing retargeting often produces artifacts like foot-skating and penetration ❌. To compensate, RL policies rely on complex ad-hoc reward terms, forcing a trade-off between accurate motion tracking and correcting errors like slipping or bad contacts. OmniRetarget fixes this…
3
5
72
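The "penetration" artifact mentioned above is usually quantified as the deepest distance any body point sinks below a surface. A flat-ground sketch of that check; real pipelines test against terrain and object meshes rather than a single plane, and this helper name is made up for illustration.

```python
import numpy as np

def max_penetration(points, ground_z=0.0):
    """Deepest distance (m) any point lies below the ground plane;
    0.0 if nothing penetrates.

    points: (N, 3) array of world positions (x, y, z)."""
    depth = ground_z - points[:, 2]
    return float(np.clip(depth, 0.0, None).max())

clean = np.array([[0., 0., 0.02], [0.1, 0., 0.5]])   # everything above ground
bad   = np.array([[0., 0., -0.03], [0.1, 0., 0.5]])  # one point 3 cm underground
print(max_penetration(clean), max_penetration(bad))  # 0.0 0.03
```

Tracking a trajectory whose reference frames already penetrate the ground is what forces the slip-vs-accuracy trade-off the tweet describes: the policy must either reproduce an impossible pose or deviate from the reference.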
@carlo_sferrazza
Carlo Sferrazza
27 days
Excited to share that I'll be joining @UTAustin in Fall 2026 as an Assistant Professor with @utmechengr @texas_robotics! I'm looking for PhD students interested in humanoids, dexterous manipulation, tactile sensing, and robot learning in general -- consider applying this cycle!
48
38
458
@rocky_duan
Rocky Duan
2 months
We're hiring interns (and full-times) all year long! Please email me if interested.
41
85
2K
@eric_srchen
Sirui Chen
2 months
Introducing HEAD 🤖, an autonomous navigation and reaching system for humanoid robots, which allows the robot to navigate around obstacles and touch an object in the environment. More details on our website and CoRL paper: https://t.co/BH6m0Slwki
3
27
151
@qiayuanliao
Qiayuan Liao
2 months
Want to achieve extreme performance in motion tracking, and go beyond it? Our preprint tech report is now online, with open-source code available!
36
246
1K
@hkz222
Kaizhe Hu
2 months
How do we learn motor skills directly in the real world? Think about learning to ride a bike: parents might be there to give you hands-on guidance. 🚲 Can we apply this same idea to robots? Introducing Robot-Trains-Robot (RTR): a new framework for real-world humanoid learning.
16
36
187
@ZeYanjie
Yanjie Ze
3 months
Excited to open-source GMR: General Motion Retargeting. Real-time human-to-humanoid retargeting on your laptop. Supports diverse motion formats & robots. Unlocks whole-body humanoid teleoperation (e.g., TWIST). Video with 🔊
22
114
699
@ZeYanjie
Yanjie Ze
6 months
🤖 Introducing TWIST: Teleoperated Whole-Body Imitation System. We develop a humanoid teleoperation system to enable coordinated, versatile, whole-body movements, using a single neural network. This is our first step toward general-purpose robots. 🌐 https://t.co/ScrdX8ImNF
16
92
436
@jiaman01
Jiaman Li
10 months
🔥 Introducing MVLift: Generate realistic 3D motion without any 3D training data - just using 2D poses from monocular videos! Applicable to human motion, human-object interaction & animal motion. Joint work w/ @jiajunwu_cs & Karen 💡 How? We reformulate 3D motion estimation as…
2
40
217
@jiaman01
Jiaman Li
10 months
🤖 Introducing Human-Object Interaction from Human-Level Instructions! First complete system that generates physically plausible, long-horizon human-object interactions with finger motions in contextual environments, driven by human-level instructions. 🔍 Our approach: - LLMs…
18
112
517