Zixuan Chen
@C___eric417
Followers
456
Following
496
Media
2
Statuses
96
PhD student at @UCSanDiego; Bachelor's Degree at @FudanUni
San Diego, CA
Joined August 2016
Introducing GMT: a general motion tracking framework that enables high-fidelity motion tracking on humanoid robots by training a single policy from large, unstructured human motion datasets. A step toward general humanoid controllers. Project Website:
3
62
250
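The tweet only sketches the recipe: one policy, conditioned on reference motion, trained across a large unstructured dataset. For readers curious what that setup typically looks like, here is a minimal illustrative sketch in the common DeepMimic style; the reward shape, weights, and function names are assumptions, not GMT's actual code.

```python
import numpy as np

def tracking_reward(ref_pose, robot_pose, ref_vel, robot_vel,
                    w_pose=0.6, w_vel=0.4):
    """Exponentiated tracking error, DeepMimic-style: reward approaches
    1 as the robot matches the reference pose and velocity."""
    pose_err = np.sum((ref_pose - robot_pose) ** 2)
    vel_err = np.sum((ref_vel - robot_vel) ** 2)
    return w_pose * np.exp(-2.0 * pose_err) + w_vel * np.exp(-0.1 * vel_err)

def make_observation(robot_state, ref_motion, t, horizon=4):
    """One policy for all clips: condition on a short window of future
    reference frames instead of training one policy per motion."""
    future = [ref_motion[min(t + k, len(ref_motion) - 1)]
              for k in range(1, horizon + 1)]
    return np.concatenate([robot_state, *future])
```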
Pretty impressive!
Meet BFM-Zero: A Promptable Humanoid Behavioral Foundation Model w/ Unsupervised RL https://t.co/3VdyRWgOqb ONE latent space for ALL tasks. Zero-shot goal reaching, tracking, and reward optimization (any reward at test time), from ONE policy. Natural recovery & transitions.
0
0
1
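"Any reward at test time, from ONE policy" usually means the task is encoded as a latent prompt rather than retrained. Below is a minimal sketch of forward-backward-style task inference under assumed interfaces (phi, reward_fn, sample_states); BFM-Zero's actual mechanism is described in the linked paper, not here.

```python
import numpy as np

def infer_task_latent(reward_fn, phi, sample_states):
    """Weight learned state features by the test-time reward and
    normalize; the same frozen policy pi(a | s, z) is then rolled
    out with this latent z as its prompt."""
    feats = np.stack([phi(s) for s in sample_states])           # (N, d)
    rewards = np.array([reward_fn(s) for s in sample_states])   # (N,)
    z = feats.T @ rewards / len(sample_states)                  # (d,)
    return z / (np.linalg.norm(z) + 1e-8)
```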
Embodied Avatar: Full-Body Teleoperation Platform. Everyone has fantasized about having an embodied avatar! A full-body teleoperation and data-acquisition platform is waiting for you to try it out!
401
1K
8K
Introducing GEN-0, our latest 10B+ foundation model for robots. Built on Harmonic Reasoning, a new architecture that can think & act seamlessly. Strong scaling laws: more pretraining & model size = better. An unprecedented corpus of 270,000+ hrs of dexterous data. Read more:
49
282
1K
We introduce PHUMA: a Physically-Grounded Humanoid Locomotion Dataset! By using human video with physically grounded retargeting, PHUMA is 3x larger than AMASS, leading to a 20% better motion-tracking policy on unseen human video. Project page: https://t.co/8ffYeLO407
6
15
82
Researchers at Beijing Academy of Artificial Intelligence (BAAI) trained a Unitree G1 to pull a 1,400 kg car.
190
458
3K
Simulation drives robotics progress, but how do we close the reality gap? Introducing GaussGym: an open-source framework for learning locomotion from pixels, with ultra-fast parallelized photorealistic rendering across >4,000 iPhone, GrandTour, ARKit, and Veo scenes! Thread:
11
64
332
Implementing motion imitation methods involves lots of nuances, and not many codebases get all the details right. So we're excited to release MimicKit! https://t.co/7enUVUkc3h A framework with high-quality implementations of our methods: DeepMimic, AMP, ASE, ADD, and more to come!
9
147
765
I have always been surprised by how few positive samples adversarial imitation learning needs to be effective. With ADD, we take this to the extreme: a differential discriminator trained with a SINGLE positive sample can still be effective for a wide range of tasks.
Training RL agents often requires tedious reward engineering. ADD can help! ADD uses a differential discriminator to automatically turn raw errors into effective training rewards for a wide variety of tasks! Excited to share our latest work: Physics-Based Motion Imitation
5
23
165
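To make the idea in the two tweets above concrete: the discriminator sees error vectors, the all-zeros error acts as the single positive sample, and the policy's actual tracking errors are the negatives; the discriminator's output is then reused as the training reward. A hedged sketch, with the architecture, loss, and names as illustrative assumptions rather than the paper's exact method:

```python
import torch
import torch.nn as nn

class DifferentialDiscriminator(nn.Module):
    """Small MLP scoring error vectors (higher = closer to 'perfect')."""
    def __init__(self, err_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(err_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, err):
        return self.net(err)

def discriminator_loss(disc, policy_errors):
    """Single positive sample: the zero-error vector (perfect tracking).
    Negatives: whatever errors the current policy produces."""
    positive = torch.zeros(1, policy_errors.shape[1])
    logits_pos = disc(positive)
    logits_neg = disc(policy_errors)
    bce = nn.functional.binary_cross_entropy_with_logits
    return (bce(logits_pos, torch.ones_like(logits_pos)) +
            bce(logits_neg, torch.zeros_like(logits_neg)))

def reward(disc, err):
    """Turn a raw error vector into a shaped reward via the discriminator."""
    with torch.no_grad():
        return torch.sigmoid(disc(err)).squeeze(-1)
```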
Training RL agents often requires tedious reward engineering. ADD can help! ADD uses a differential discriminator to automatically turn raw errors into effective training rewards for a wide variety of tasks! Excited to share our latest work: Physics-Based Motion Imitation
6
52
291
Westlake Robotics just dropped the General Action Expert (GAE), a general large model that can generate arbitrary actions in real time with very low latency. It allows the robot to become your physical avatar, executing any action like a shadow. #WestlakeRobotics #GAE #Robotics
The Unitree G1 just picked up a new trick! It's now showing off skills as your potential future household assistant, thanks to training from Westlake University's Lab for Trustworthy & General AI. Chores, anyone? #Robotics #UnitreeG1 #AI
21
124
530
How can we build a general-purpose motion model for humanoid robots, so that they can perform a wide range of dexterous motions? A cool discussion with @C___eric417!
We've all seen videos of humanoid robots performing single tasks that are very impressive, like dancing or karate. But training humanoid robots to perform a wide range of complex motions is difficult. GMT is a general-purpose policy that can learn a wide range of robot motions.
3
18
138
We've all seen videos of humanoid robots performing single tasks that are very impressive, like dancing or karate. But training humanoid robots to perform a wide range of complex motions is difficult. GMT is a general-purpose policy that can learn a wide range of robot motions.
1
5
39
Full episode dropping soon! Geeking out with @C___eric417 GMT: General Motion Tracking for Humanoid Whole-Body Control https://t.co/UPP4JBV1pi Co-hosted by @micoolcho @chris_j_paxton
1
4
9
Full episode dropping soon! Geeking out with @C___eric417 GMT: General Motion Tracking for Humanoid Whole-Body Control https://t.co/UPP4JBUtzK Co-hosted by @micoolcho @chris_j_paxton
1
2
3
Our humanoid robot can now rally over 100 consecutive shots against a human in real table tennis: fully autonomous, sub-second reaction, human-like strikes.
118
563
3K
Want to achieve extreme performance in motion tracking, and go beyond it? Our preprint tech report is now online, with open-source code available!
36
242
1K
Super cool! Looking forward to the paper!
Excited to share our latest progress on building a Behavior Foundation Model for Humanoid Robots! Forward roll, hip-hop dance, even cartwheel -- all the things you never imagined the little G1 could do -- we made them happen with ONE model. Stay tuned for paper and code!
1
0
4
They say the best time to tweet about your research was 1 year ago; the second best time is now. With RAI, formerly known as the Boston Dynamics AI Institute, we present DiffuseCloC, the first guidable physics-based diffusion model. https://t.co/P3PofBZl7t
7
60
363
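"Guidable" here means a user-supplied objective can steer sampling at inference time. Below is a generic classifier-guidance sketch under assumed interfaces (denoiser, a cumulative-alpha schedule, guide_fn); it illustrates the general technique, not DiffuseCloC's actual algorithm.

```python
import torch

def guided_denoise(denoiser, x_T, alphas, guide_fn, scale=1.0):
    """Deterministic reverse diffusion with gradient guidance: at each
    step, nudge the predicted clean sample toward higher guide_fn."""
    x = x_T
    for t in reversed(range(len(alphas))):
        with torch.enable_grad():
            x_in = x.detach().requires_grad_(True)
            grad = torch.autograd.grad(guide_fn(x_in).sum(), x_in)[0]
        eps = denoiser(x, t)                          # predicted noise
        a = alphas[t]                                 # cumulative alpha-bar_t
        x0 = (x - (1 - a).sqrt() * eps) / a.sqrt()    # predict clean sample
        x0 = x0 + scale * grad                        # guidance step
        if t > 0:                                     # re-noise toward t-1
            a_prev = alphas[t - 1]
            x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps
        else:
            x = x0
    return x
```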
How can we leverage diverse human videos to improve robot manipulation? Excited to introduce EgoVLA, a Vision-Language-Action model trained on egocentric human videos by explicitly modeling wrist & hand motion. We build a shared action space between humans and robots, enabling
6
73
491
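A "shared action space" typically means representing both human video and robot commands in one common format, e.g. wrist pose plus hand articulation, so one model's outputs can drive either embodiment. The dataclass and linear retarget map below are illustrative assumptions, not EgoVLA's actual interface.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class SharedAction:
    wrist_pos: np.ndarray   # (3,) wrist position in the camera frame
    wrist_rot: np.ndarray   # (3,) axis-angle wrist orientation
    hand_q: np.ndarray      # (K,) hand joint angles / synergies

def to_robot_command(act: SharedAction, hand_map: np.ndarray):
    """Retarget: the end-effector target passes through unchanged, while
    human hand joints map linearly onto the robot gripper's DoFs."""
    ee_target = np.concatenate([act.wrist_pos, act.wrist_rot])
    gripper_q = hand_map @ act.hand_q
    return ee_target, gripper_q
```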
Introducing LeVERB, the first latent whole-body humanoid VLA (upper- & lower-body), trained on sim data and zero-shot deployed. Addressing interactive tasks: navigation, sitting, locomotion with verbal instruction. Thread: https://t.co/LagyYCobiD
13
112
462