Xiaoyu (Haytham) Huang
@x_h_ucb
Followers
244
Following
1
Media
6
Statuses
21
Ph.D. student @ UC Berkeley. Interested in learning-based locomotion/loco-manipulation.
Berkeley, CA
Joined September 2024
Humanoid motion tracking performance is largely determined by retargeting quality! Introducing OmniRetarget, generating high-quality, interaction-preserving data from human motions for learning complex humanoid skills with minimal RL: just 5 rewards and 4 DR terms (illustrative reward sketch below).
31
156
673
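Below is a minimal, purely illustrative sketch of what a small motion-tracking reward set can look like. The post only says "5 rewards"; the specific terms, scales, and the `robot`/`ref` dictionaries here are assumptions for illustration, not OmniRetarget's actual design.

```python
import numpy as np

def tracking_reward(robot: dict, ref: dict) -> float:
    """Illustrative minimal tracking reward: five exponentiated L2 error terms.

    `robot` holds the current simulated state and `ref` the retargeted
    reference at the same timestep; the term choice and scales are assumed.
    """
    def exp_err(a, b, scale):
        return np.exp(-scale * np.sum((a - b) ** 2))

    r_body_pos  = exp_err(robot["body_pos"],  ref["body_pos"],     5.0)   # root/body position
    r_body_rot  = exp_err(robot["body_quat"], ref["body_quat"],    2.0)   # root/body orientation
    r_joint_pos = exp_err(robot["q"],         ref["q"],            1.0)   # joint positions
    r_joint_vel = exp_err(robot["dq"],        ref["dq"],           0.01)  # joint velocities
    r_smooth    = exp_err(robot["action"],    robot["prev_action"], 0.1)  # action smoothness
    return r_body_pos + r_body_rot + r_joint_pos + r_joint_vel + r_smooth
```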
The most interesting thing we learned is that adding basic domain randomization is already enough for dynamic motion tracking! Details in the preprint tech report (illustrative DR sketch below).
Want to achieve extreme performance in motion tracking, and go beyond it? Our preprint tech report is now online, with open-source code available!
0
0
22
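A sketch of the kind of "basic" domain randomization the post refers to. The post mentions 4 DR terms but does not list them; friction, link mass, motor strength, and random pushes (and their ranges) are assumptions chosen only to illustrate the idea.

```python
import numpy as np

def sample_randomization(rng: np.random.Generator) -> dict:
    """Resample simple physical randomizations once per training episode."""
    return {
        "friction":    rng.uniform(0.4, 1.2),             # ground friction coefficient
        "mass_scale":  rng.uniform(0.9, 1.1),             # per-link mass multiplier
        "motor_scale": rng.uniform(0.9, 1.1),             # actuator strength multiplier
        "push_vel":    rng.uniform(-0.5, 0.5, size=2),    # random base push (m/s)
    }

# Usage: draw new parameters at episode reset and apply them to the simulator.
params = sample_randomization(np.random.default_rng(0))
```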
Want to achieve extreme performance in motion tracking, and go beyond it? Our preprint tech report is now online, with open-source code available!
34
240
1K
Researchers from the RAI Institute present Diffuse-CLoC, a new control policy that fuses kinematic motion diffusion models with physics-based control to produce motions that are both physically realistic and precisely controllable. This breakthrough moves us closer to developing...
2
39
173
DiffuseCLoC is a leap toward foundation control policies for high-DOF characters and humanoid robots. More demos and paper: https://t.co/PYWYywWSkJ Internship project at @rai_inst. Catch our talk at SIGGRAPH 2025! #SIGGRAPH2025 #Humanoid #Animation #Robotics
diffusecloc.github.io
A guided diffusion framework enabling steerable and physically realistic motion generation for character control.
2
1
4
And yes... parkour. Because guided diffusion doesn't just generalize: it moves with agility, dynamics, and style.
1
0
5
Need your character to hit a few key poses? Just inpaint those desired states into the prediction horizon; DiffuseCLoC fills in the motion seamlessly, even if it's never seen those keypoints before. Perfect for animation, editing, or planning (illustrative inpainting sketch below).
1
0
3
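A minimal sketch of state inpainting during diffusion sampling, in the spirit of this post: at each denoising step, the horizon entries that correspond to user-specified key poses are overwritten with a noised copy of those poses, so the model fills in the rest of the motion. Tensor shapes, the `noise_schedule` layout, and the function name are assumptions, not the paper's exact implementation.

```python
import torch

def inpaint_keyframes(x_t: torch.Tensor, keyframes: torch.Tensor, mask: torch.Tensor,
                      noise_schedule: torch.Tensor, t: int) -> torch.Tensor:
    """Overwrite key-pose slots of a noisy horizon with noised keyframe states.

    x_t:       (H, D) noisy horizon of states at diffusion step t
    keyframes: (H, D) desired states (only meaningful where mask is True)
    mask:      (H,)   True where a key pose is specified
    """
    alpha_bar = noise_schedule[t]  # cumulative alpha at step t (assumed schedule layout)
    noised_kf = alpha_bar.sqrt() * keyframes + (1 - alpha_bar).sqrt() * torch.randn_like(keyframes)
    return torch.where(mask.unsqueeze(-1), noised_kf, x_t)
```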
Plug in a joystick and you're in control. With just an L2 cost on velocity + height, DiffuseCLoC delivers responsive interactive control for navigation and motion. Furthermore, it runs in real time! (Illustrative cost sketch below.)
1
0
4
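A hedged sketch of the L2 guidance cost the post describes. Which indices of the predicted state hold the planar root velocity and root height is an assumption made only for illustration; during sampling, the gradient of this cost with respect to the noisy sample nudges each denoising step toward the joystick command.

```python
import torch

def joystick_cost(pred_states: torch.Tensor, target_vel: torch.Tensor,
                  target_height: float) -> torch.Tensor:
    """Simple L2 cost on commanded root velocity and height over the horizon."""
    root_vel    = pred_states[..., 0:2]   # planar root velocity (assumed state layout)
    root_height = pred_states[..., 2:3]   # root height          (assumed state layout)
    return ((root_vel - target_vel) ** 2).sum() + ((root_height - target_height) ** 2).sum()
```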
Want to avoid obstacles on the fly? Just define a cost function. Using simple SDF + L2 costs, DiffuseCLoC dodges other characters and pillars with smooth, lifelike agility, with zero retraining (illustrative SDF cost sketch below).
1
0
4
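One way such an SDF cost can look, assuming circular pillars so the signed distance is closed-form; the geometry, margin, and names here are illustrative assumptions, not the paper's exact cost.

```python
import torch

def obstacle_cost(pred_xy: torch.Tensor, pillar_centers: torch.Tensor,
                  pillar_radius: float, margin: float = 0.3) -> torch.Tensor:
    """Penalize predicted root positions whose clearance to the nearest pillar is below `margin`.

    pred_xy:        (H, 2) predicted planar root positions over the horizon
    pillar_centers: (K, 2) obstacle centers
    """
    d = torch.cdist(pred_xy, pillar_centers) - pillar_radius   # signed distance to each pillar surface
    sdf = d.min(dim=-1).values                                 # nearest obstacle per timestep
    return torch.clamp(margin - sdf, min=0.0).pow(2).sum()     # only penalize inside the margin
```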
End-to-end. Predictive. Powerful. DiffuseCLoC directly predicts future states and motor PD targets, then optimizes them to hit any goal at test time. This turns learning-based control into optimization during inference, a new paradigm for motion intelligence (illustrative guided-sampling sketch below).
1
0
11
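A simplified sketch of test-time guidance over a jointly predicted (state, PD-target) horizon, mirroring the idea in the post rather than the paper's exact sampler: any differentiable cost on the model's clean prediction steers sampling, and only the first PD target of the final plan is executed before replanning. `denoise_fn`, `cost_fn`, and the update rule are assumptions.

```python
import torch

def plan_and_act(denoise_fn, cost_fn, x_T: torch.Tensor, steps: int,
                 guidance_scale: float = 1.0) -> torch.Tensor:
    """Guided denoising of a (state + PD-target) horizon, then receding-horizon execution."""
    x_t = x_T
    for t in reversed(range(steps)):
        x_t = x_t.detach().requires_grad_(True)
        x0_pred = denoise_fn(x_t, t)                       # predicted clean states + PD targets
        grad = torch.autograd.grad(cost_fn(x0_pred), x_t)[0]
        x_t = (x0_pred - guidance_scale * grad).detach()   # simplified update; real samplers re-noise
    return x_t[0]                                          # first PD target: execute, then replan
```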
Proud to present our SIGGRAPH 2025 journal paper, DiffuseCLoC: Guided Diffusion for Physics-Based Character Look-Ahead Control. A single, generalist control policy that solves diverse tasks at test time, no fine-tuning needed. Project page: https://t.co/MQ85X5Li7N Thread below.
6
30
164
With high-fidelity simulation and ray-tracing rendering, we can minimize the sim-to-real gap and achieve zero-shot sim-to-real transfer! We hope this is a critical step toward scaling humanoid-specific data, which is scarce at the moment.
Introducing LeVERB, the first latent whole-body humanoid VLA (upper- & lower-body), trained on sim data and zero-shot deployed. Addressing interactive tasks: navigation, sitting, locomotion with verbal instruction. Thread: https://t.co/LagyYCobiD
0
1
5
Introducing LeVERB, the first latent whole-body humanoid VLA (upper- & lower-body), trained on sim data and zero-shot deployed. Addressing interactive tasks: navigation, sitting, locomotion with verbal instruction. Thread: https://t.co/LagyYCobiD
13
112
463
Just presented DiffuseLoco at #CoRL2024! DiffuseLoco learns diverse, multimodal skills for legged robots purely from offline datasets with a 6.8M-parameter transformer DDPM (yes, it runs onboard at 30 Hz!). A step towards large-scale learning for control. Code & checkpoints: https://t.co/CqX0EDjvHk (illustrative control-loop sketch below).
1
21
112
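For context, 30 Hz leaves roughly 33 ms per control step for observation I/O plus denoising. Below is a generic fixed-rate onboard loop; whether DiffuseLoco schedules inference exactly this way is an assumption, the post only states that it runs onboard at 30 Hz.

```python
import time

CONTROL_HZ = 30
BUDGET_MS = 1000.0 / CONTROL_HZ  # ~33.3 ms per step for inference + I/O

def control_loop(policy, get_obs, send_action):
    """Run a (diffusion) policy at a fixed control rate; names are illustrative."""
    period = 1.0 / CONTROL_HZ
    while True:
        t0 = time.monotonic()
        action = policy(get_obs())   # denoising must fit within the ~33 ms budget
        send_action(action)
        time.sleep(max(0.0, period - (time.monotonic() - t0)))
```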
Kudos to @ZhiSu22 @x_h_ucb for our #IROS24 paper exploiting morphological symmetries in model-free RL for locomotion/manipulation.
Exploiting morphological symmetries can enhance model-free RL sample efficiency, policy optimality, and sim2real transfer in legged locomotion and manipulation. Our #IROS2024 paper leverages these symmetries for RL methods. Code is open-sourced: https://t.co/1I2TMl0sfW
1
3
6
Exploiting morphological symmetries can enhance model-free RL sample efficiency, policy optimality, and sim2real transfer in legged locomotion and manipulation. Our #IROS2024 paper leverages these symmetries for RL methods. Code is open-sourced: https://t.co/1I2TMl0sfW (illustrative symmetry-augmentation sketch below).
1
8
28
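One common way to exploit a left-right morphological symmetry is to mirror transitions with fixed permutation/sign maps, effectively doubling the data seen by the learner. The sketch below shows only that augmentation view; whether the paper uses augmentation, equivariant networks, or both is not stated in the post, and all names here are illustrative.

```python
import numpy as np

def reflect_transition(obs, act, next_obs, perm_obs, sign_obs, perm_act, sign_act):
    """Mirror one (obs, act, next_obs) transition of a left-right symmetric robot.

    perm_* are index permutations that swap left/right limbs; sign_* flip
    lateral quantities (e.g. y-velocity, roll, yaw) under the reflection.
    """
    mirror = lambda x, perm, sign: sign * np.asarray(x)[perm]
    return (mirror(obs,      perm_obs, sign_obs),
            mirror(act,      perm_act, sign_act),
            mirror(next_obs, perm_obs, sign_obs))
```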
Introducing HiLMa-Res: a hierarchical RL framework for quadrupeds to tackle loco-manipulation tasks with sustained mobility! Designed for general learning tasks (vision-based, state-based, real-world data, etc.), the robot can now step over stones, navigate boxes, and dribble a ball (illustrative structure sketch below).
4
23
125
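A rough sketch of one way such a hierarchy can be wired, assuming (based only on the post) a pretrained low-level locomotion controller kept mobile while a task-specific high-level policy supplies commands plus a residual correction. The exact interface, rates, and class/parameter names are assumptions for illustration.

```python
import numpy as np

class HierarchicalController:
    """Illustrative high-level-over-low-level structure in the spirit of HiLMa-Res."""

    def __init__(self, high_level, low_level, residual_scale: float = 0.1):
        self.high_level = high_level          # task policy (vision- or state-based)
        self.low_level = low_level            # pretrained locomotion controller
        self.residual_scale = residual_scale  # keeps corrections small to preserve mobility

    def act(self, task_obs: np.ndarray, proprio_obs: np.ndarray) -> np.ndarray:
        command, residual = self.high_level(task_obs)          # e.g. desired base velocity + correction
        joint_targets = self.low_level(proprio_obs, command)   # nominal locomotion action
        return joint_targets + self.residual_scale * residual
```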