Zixuan Chen Profile
Zixuan Chen

@C___eric417

Followers: 456 · Following: 496 · Media: 2 · Statuses: 96

PhD student at @UCSanDiego; Bachelor's degree from @FudanUni

San Diego, CA
Joined August 2016
@C___eric417
Zixuan Chen
5 months
🚀 Introducing GMT — a general motion tracking framework that enables high-fidelity motion tracking on humanoid robots by training a single policy from large, unstructured human motion datasets. 🤖 A step toward general humanoid controllers. Project Website:
3 replies · 62 reposts · 250 likes
@C___eric417
Zixuan Chen
9 days
Pretty impressive!
@li_yitang
Yitang Li
9 days
Meet BFM-Zero: A Promptable Humanoid Behavioral Foundation Model w/ Unsupervised RL 👉 https://t.co/3VdyRWgOqb 🧩 ONE latent space for ALL tasks ⚡ Zero-shot goal reaching, tracking, and reward optimization (any reward at test time), from ONE policy 🤖 Natural recovery & transition
0 replies · 0 reposts · 1 like
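Mechanically, the "ONE latent space for ALL tasks" claim above amounts to a latent-conditioned policy: the prompt (a goal, a reference motion, or a reward) is encoded into a latent z, and a single policy consumes z alongside the observation. A minimal sketch in PyTorch, assuming hypothetical names and dimensions; this is an illustration of the idea, not BFM-Zero's actual API:

```python
import torch
import torch.nn as nn

# Minimal sketch of a latent-conditioned humanoid policy, as described
# in the tweet above. All names and dimensions are illustrative
# assumptions, not the BFM-Zero implementation.
class LatentConditionedPolicy(nn.Module):
    def __init__(self, obs_dim: int, latent_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, act_dim),
        )

    def forward(self, obs: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # One policy serves all tasks; the prompt only changes z.
        return self.net(torch.cat([obs, z], dim=-1))

policy = LatentConditionedPolicy(obs_dim=64, latent_dim=32, act_dim=23)
obs = torch.randn(1, 64)
z_goal = torch.randn(1, 32)   # stand-in for an encoded goal/motion/reward prompt
action = policy(obs, z_goal)  # zero-shot: no task-specific finetuning
```

"Zero-shot" here means swapping the prompt encoder's output at test time while the policy weights stay fixed.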
@UnitreeRobotics
Unitree
10 days
Embodied Avatar: Full-body Teleoperation Platform 🥳 Everyone has fantasized about having an embodied avatar! The full-body teleoperation and data-acquisition platform is waiting for you to try it out!
401 replies · 1K reposts · 8K likes
@GeneralistAI
Generalist
12 days
Introducing GEN-0, our latest 10B+ foundation model for robots ⏱️ built on Harmonic Reasoning, a new architecture that can think & act seamlessly 📈 strong scaling laws: more pretraining & model size = better 🌏 unprecedented corpus of 270,000+ hrs of dexterous data Read more 👇
49 replies · 282 reposts · 1K likes
@lee_kyungmin21
Kyungmin Lee
15 days
We introduce PHUMA: a Physically-Grounded Humanoid Locomotion Dataset! ✨ By using human video with physically grounded retargeting, PHUMA is 3x larger than AMASS, leading to a 20% better motion tracking policy for unseen human video. Project page: https://t.co/8ffYeLO407
6 replies · 15 reposts · 82 likes
@TheHumanoidHub
The Humanoid Hub
19 days
Researchers at Beijing Academy of Artificial Intelligence (BAAI) trained a Unitree G1 to pull a 1,400 kg car.
190 replies · 458 reposts · 3K likes
@alescontrela
Alejandro Escontrela
26 days
Simulation drives robotics progress, but how do we close the reality gap? Introducing GaussGym: an open-source framework for learning locomotion from pixels with ultra-fast parallelized photorealistic rendering across >4,000 iPhone, GrandTour, ARKit, and Veo scenes! Thread 🧵
11 replies · 64 reposts · 332 likes
@xbpeng4
Jason Peng
1 month
Implementing motion imitation methods involves lots of nuances. Not many codebases get all the details right. So we're excited to release MimicKit! https://t.co/7enUVUkc3h A framework with high-quality implementations of our methods: DeepMimic, AMP, ASE, ADD, and more to come!
9 replies · 147 reposts · 765 likes
@xbpeng4
Jason Peng
1 month
I have always been surprised by how few positive samples adversarial imitation learning needs to be effective. With ADD we take this to the extreme! A differential discriminator trained with a SINGLE positive sample can still be effective for a wide range of tasks.
@ziyu_zhang73354
Ziyu (Charlotte) Zhang
1 month
Training RL agents often requires tedious reward engineering. ADD can help! ADD uses a differential discriminator to automatically turn raw errors into effective training rewards for a wide variety of tasks! 🚀 Excited to share our latest work: Physics-Based Motion Imitation
5 replies · 23 reposts · 165 likes
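Both ADD tweets describe the same mechanism: a discriminator scores raw tracking errors, and that score is reused as the policy's reward. A minimal sketch under stated assumptions (PyTorch; the "SINGLE positive sample" read as the zero-error vector; all names hypothetical, not the paper's code):

```python
import torch
import torch.nn as nn

# Illustrative sketch of a differential discriminator that turns raw
# tracking errors into a training reward, per the ADD tweets above.
# Names and details are assumptions, not the paper's implementation.
class DifferentialDiscriminator(nn.Module):
    def __init__(self, err_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(err_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, err: torch.Tensor) -> torch.Tensor:
        return self.net(err)

disc = DifferentialDiscriminator(err_dim=12)

# The SINGLE positive sample: a perfect rollout has zero error.
positive = torch.zeros(1, 12)
# Negatives: errors observed from the current policy's rollouts.
negatives = torch.randn(64, 12)

bce = nn.BCEWithLogitsLoss()
loss = bce(disc(positive), torch.ones(1, 1)) + \
       bce(disc(negatives), torch.zeros(64, 1))

# Reward shaping: rollouts whose errors look "positive" (near zero)
# to the discriminator receive high reward.
reward = torch.sigmoid(disc(negatives)).squeeze(-1)
```

Under this reading, no hand-designed reward terms are needed; the discriminator learns how to weight the raw error components.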
@XRoboHub
RoboHub 🤖
2 months
Westlake Robotics just dropped the General Action Expert (GAE), a general large model that can generate arbitrary actions in real time with very low latency. It allows the robot to become your physical avatar, executing any action like a shadow. #WestlakeRobotics #GAE #Robotics
@XRoboHub
RoboHub 🤖
4 months
The Unitree G1 just picked up a new trick! It's now showing off skills as your potential future household assistant, thanks to training from Westlake University's Lab for Trustworthy & General AI. Chores, anyone? 🤖 #Robotics #UnitreeG1 #AI
21 replies · 124 reposts · 530 likes
@chris_j_paxton
Chris Paxton
2 months
How can we build a general-purpose motion model for humanoid robots, so that they can perform a wide range of dexterous motions? A cool discussion with @C___eric417!
@RoboPapers
RoboPapers
2 months
We've all seen videos of humanoid robots performing single tasks that are very impressive, like dancing or karate. But training humanoid robots to perform a wide range of complex motions is difficult. GMT is a general-purpose policy which can learn a wide range of robot motions.
3 replies · 18 reposts · 138 likes
@RoboPapers
RoboPapers
2 months
Full episode dropping soon! Geeking out with @C___eric417 GMT: General Motion Tracking for Humanoid Whole-Body Control https://t.co/UPP4JBV1pi Co-hosted by @micoolcho @chris_j_paxton
1 reply · 4 reposts · 9 likes
@ZhiSu22
Zhi Su
3 months
๐Ÿ“๐Ÿค– Our humanoid robot can now rally over 100 consecutive shots against a human in real table tennis โ€” fully autonomous, sub-second reaction, human-like strikes.
118 replies · 563 reposts · 3K likes
@qiayuanliao
Qiayuan Liao
3 months
Want to achieve extreme performance in motion tracking — and go beyond it? Our preprint tech report is now online, with open-source code available!
36 replies · 242 reposts · 1K likes
@C___eric417
Zixuan Chen
3 months
Super cool! Looking forward to the paper!
@weishuaizeng
Weishuai Zeng
3 months
Excited to share our latest progress on building a Behavior Foundation Model for Humanoid Robots 🎈 Forward roll, hip-hop dance, even cartwheel -- all the things you never imagined the little G1 could do -- we have made it based on ONE model 👌 Stay tuned for paper and code 😉
1 reply · 0 reposts · 4 likes
@TakaraTruong
Takara Truong
3 months
They say the best time to tweet about your research was 1 year ago; the second best time is now. With RAI, formerly known as the Boston Dynamics AI Institute, we present DiffuseCloC - the first guidable physics-based diffusion model. https://t.co/P3PofBZl7t
7 replies · 60 reposts · 363 likes
@RchalYang
Ruihan Yang
4 months
How can we leverage diverse human videos to improve robot manipulation? Excited to introduce EgoVLA — a Vision-Language-Action model trained on egocentric human videos by explicitly modeling wrist & hand motion. We build a shared action space between humans and robots, enabling
6 replies · 73 reposts · 491 likes
@HaoruXue
Haoru Xue
5 months
🚀 Introducing LeVERB, the first latent whole-body humanoid VLA (upper- & lower-body), trained on sim data and zero-shot deployed. Addressing interactive tasks: navigation, sitting, locomotion with verbal instruction. 🧵 https://t.co/LagyYCobiD
13 replies · 112 reposts · 462 likes