Xiaoyu (Haytham) Huang

@x_h_ucb

Followers: 244
Following: 1
Media: 6
Statuses: 21

Ph.D. student @ UC Berkeley. Interested in learning-based locomotion/loco-manipulation.

Berkeley, CA
Joined September 2024
@zhenkirito123
Zhen Wu
3 months
Humanoid motion tracking performance is greatly determined by retargeting quality! Introducing OmniRetarget 🎯, generating high-quality interaction-preserving data from human motions for learning complex humanoid skills with minimal RL: 5 rewards, 4 DR
31
156
673
@x_h_ucb
Xiaoyu (Haytham) Huang
4 months
Really impressive work! Especially from an undergrad in under 3 months! Zhi is also applying for a PhD this year!
@ZhiSu22
Zhi Su
4 months
๐Ÿ“๐Ÿค– Our humanoid robot can now rally over 100 consecutive shots against a human in real table tennis โ€” fully autonomous, sub-second reaction, human-like strikes.
0
0
16
@x_h_ucb
Xiaoyu (Haytham) Huang
4 months
The most interesting thing we learned is that adding basic domain randomization is already enough for dynamic motion tracking! Details in the preprint tech report.
@qiayuanliao
Qiayuan Liao
4 months
Want to achieve extreme performance in motion tracking – and go beyond it? Our preprint tech report is now online, with open-source code available!
0
0
22
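For context on the claim in the tweet above, "basic" domain randomization usually means resampling a handful of physics parameters every episode. The Python sketch below is an illustration only; the parameter names and ranges are assumptions, not the configuration released with the tech report.

```python
import numpy as np

# Hypothetical "basic" domain randomization ranges; names and bounds are
# illustrative assumptions, not the paper's released configuration.
DR_RANGES = {
    "friction":   (0.5, 1.25),  # ground friction coefficient
    "mass_scale": (0.9, 1.1),   # per-link mass multiplier
    "kp_scale":   (0.9, 1.1),   # PD proportional-gain multiplier
    "push_force": (0.0, 50.0),  # magnitude of random external pushes (N)
}

def sample_domain(rng: np.random.Generator) -> dict:
    """Draw one set of randomized physics parameters for an episode."""
    return {name: float(rng.uniform(lo, hi)) for name, (lo, hi) in DR_RANGES.items()}

# Example: resample at the start of every training rollout and write the
# values into the simulator so the policy never overfits to one dynamics set.
rng = np.random.default_rng(seed=0)
episode_params = sample_domain(rng)
```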
@qiayuanliao
Qiayuan Liao
4 months
Want to achieve extreme performance in motion tracking – and go beyond it? Our preprint tech report is now online, with open-source code available!
34
240
1K
@rai_inst
RAI Institute
4 months
Researchers from RAI Institute present Diffuse-CLoC, a new control policy that fuses kinematic motion diffusion models with physics-based control to produce motions that are both physically realistic and precisely controllable. This breakthrough moves us closer to developing
2
39
173
@x_h_ucb
Xiaoyu (Haytham) Huang
4 months
DiffuseCLoC is a leap toward foundation control policies for high-DOF characters and humanoid robots. 🔗 More demos and paper: https://t.co/PYWYywWSkJ 💡 Internship project at @rai_inst 🎤 Catch our talk at SIGGRAPH 2025! #SIGGRAPH2025 #Humanoid #Animation #Robotics
diffusecloc.github.io
A guided diffusion framework enabling steerable and physically realistic motion generation for character control.
2
1
4
@x_h_ucb
Xiaoyu (Haytham) Huang
4 months
🧗‍♂️ And yes... parkour. Because guided diffusion doesn't just generalize – it moves with agility, dynamics, and style.
1
0
5
@x_h_ucb
Xiaoyu (Haytham) Huang
4 months
✨ Need your character to hit a few key poses? Just inpaint those desired states into the prediction horizon – DiffuseCLoC fills in the motion seamlessly, even if it's never seen those keypoints before. Perfect for animation, editing, or planning.
1
0
3
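The inpainting trick described in the tweet above follows the standard diffusion-inpainting recipe applied to a state trajectory: at each denoising step, the frames that must hit a key pose are re-noised from the target states and written back into the sample, so the model only has to generate the unconstrained frames. The sketch below is a minimal, hedged illustration of that generic idea; the tensor names and shapes are assumptions, not DiffuseCLoC's exact implementation.

```python
import torch

def inpaint_keyframes(x_t, keyframe_idx, keyframe_states, alpha_bar_t):
    """
    Illustrative state inpainting over a diffusion prediction horizon.

    x_t:             (H, D) noisy state trajectory at diffusion step t
    keyframe_idx:    indices along the horizon that must match target poses
    keyframe_states: (K, D) clean target states to hit
    alpha_bar_t:     cumulative noise-schedule value at step t (scalar tensor)
    """
    # Re-noise the known target frames to the current diffusion step.
    noise = torch.randn_like(keyframe_states)
    noised_targets = alpha_bar_t.sqrt() * keyframe_states + (1 - alpha_bar_t).sqrt() * noise
    # Write them back so the sampler only fills in the unconstrained frames.
    x_t = x_t.clone()
    x_t[keyframe_idx] = noised_targets
    return x_t
```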
@x_h_ucb
Xiaoyu (Haytham) Huang
4 months
🎮 Plug in a joystick – and you're in control. With just an L2 cost on velocity + height, DiffuseCLoC delivers responsive interactive control for navigation and motion. Furthermore, it runs in real time!
1
0
4
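An "L2 cost on velocity + height" like the one mentioned in the tweet above fits in a few lines. The state layout, indices, and weights below are assumptions chosen for illustration; in guided sampling, the gradient of this cost with respect to the predicted states nudges each denoising step toward the joystick command.

```python
import torch

def joystick_cost(pred_states, target_vel, target_height,
                  vel_slice=slice(0, 2), height_idx=2, w_vel=1.0, w_h=0.5):
    """
    Illustrative L2 guidance cost for interactive control.

    pred_states:   (H, D) predicted state trajectory from the diffusion model
    target_vel:    (2,) desired planar root velocity from the joystick
    target_height: desired root height (scalar)
    """
    vel_err = pred_states[:, vel_slice] - target_vel      # velocity tracking error
    h_err = pred_states[:, height_idx] - target_height    # height tracking error
    return w_vel * (vel_err ** 2).sum() + w_h * (h_err ** 2).sum()
```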
@x_h_ucb
Xiaoyu (Haytham) Huang
4 months
🧭 Want to avoid obstacles on the fly? Just define a cost function. Using simple SDF + L2 costs, DiffuseCLoC dodges other characters and pillars with smooth, lifelike agility – zero retraining.
1
0
4
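Likewise, the "SDF + L2" cost from the tweet above can be a simple signed-distance penalty over the predicted root trajectory. The cylindrical-pillar geometry, margin, and weighting below are assumptions used purely to illustrate steering the sampler with an analytic cost, with no retraining.

```python
import torch

def obstacle_cost(pred_root_xy, pillar_centers, pillar_radius=0.4, margin=0.2):
    """
    Illustrative signed-distance penalty for cylindrical obstacles.

    pred_root_xy:   (H, 2) predicted planar root positions over the horizon
    pillar_centers: (M, 2) obstacle centers in the same frame
    """
    # Distance from each predicted position to each pillar center.
    d = torch.cdist(pred_root_xy, pillar_centers)      # (H, M)
    sdf = d - pillar_radius                            # signed distance to the pillar surface
    violation = torch.clamp(margin - sdf, min=0.0)     # positive only inside the safety margin
    return (violation ** 2).sum()
```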
@x_h_ucb
Xiaoyu (Haytham) Huang
4 months
🤖 End-to-end. Predictive. Powerful. DiffuseCLoC directly predicts future states and motor PD targets – then optimizes them to hit any goal at test time. 💡 This turns learning-based control into optimization during inference – a new paradigm for motion intelligence.
1
0
11
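The "optimization during inference" framing in the tweet above amounts to guidance: during sampling, the model's predicted future states and PD targets are nudged by the gradient of whatever goal cost is defined at test time. The sketch below illustrates one such guided step in the reconstruction-guidance style; the function names, the single-gradient update, and the guidance scale are assumptions, not the paper's exact sampler.

```python
import torch

def guided_denoise_step(denoiser, x_t, t, cost_fn, guidance_scale=1.0):
    """
    One guided denoising step over a noisy trajectory of states + PD targets.

    denoiser:  callable (x_t, t) -> predicted clean trajectory x0
    x_t:       (H, D) noisy trajectory at diffusion step t
    cost_fn:   any differentiable test-time goal, e.g. the L2/SDF costs above
    """
    x_t = x_t.detach().requires_grad_(True)
    x0_pred = denoiser(x_t, t)                  # predicted future states and motor PD targets
    cost = cost_fn(x0_pred)                     # scalar goal cost defined at test time
    grad = torch.autograd.grad(cost, x_t)[0]    # sensitivity of the goal w.r.t. the noisy sample
    # Nudge the prediction down the cost gradient before the next denoising step.
    return x0_pred - guidance_scale * grad
```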
@x_h_ucb
Xiaoyu (Haytham) Huang
4 months
🔥 Proud to present our SIGGRAPH 2025 Journal paper – DiffuseCLoC: Guided Diffusion for Physics-Based Character Look-Ahead Control 🚀 A single, generalist control policy that solves diverse tasks at test time, no fine-tuning needed. 🎥 Proj Page: https://t.co/MQ85X5Li7N 🧵 Thread 👇
6
30
164
@x_h_ucb
Xiaoyu (Haytham) Huang
6 months
With high-fidelity simulation and ray-traced rendering, we can minimize the sim-to-real gap and achieve zero-shot sim-to-real transfer! We hope this is a critical step toward scaling up humanoid-specific data, which is scarce at the moment.
@HaoruXue
Haoru Xue
6 months
🚀 Introducing LeVERB, the first latent whole-body humanoid VLA (upper- & lower-body), trained on sim data and zero-shot deployed. Addressing interactive tasks: navigation, sitting, locomotion with verbal instruction. 🧵 https://t.co/LagyYCobiD
0
1
5
@HaoruXue
Haoru Xue
6 months
🚀 Introducing LeVERB, the first latent whole-body humanoid VLA (upper- & lower-body), trained on sim data and zero-shot deployed. Addressing interactive tasks: navigation, sitting, locomotion with verbal instruction. 🧵 https://t.co/LagyYCobiD
13
112
463
@ZhongyuLi4
Zhongyu Li
1 year
Just presented DiffuseLoco at #CoRL2024! DiffuseLoco learns diverse, multimodal skills for legged robots purely from offline datasets with a 6.8M-parameter transformer DDPM (YES, it runs onboard at 30 Hz!). A step towards large-scale learning for control. Code & ckpts 👉 https://t.co/CqX0EDjvHk
1
21
112
@OrdonezApraez
Daniel Felipe Ordoñez Apraez 🇨🇴
1 year
Kudos to @ZhiSu22 @x_h_ucb for our #IROS24 paper exploiting morphological symmetries in model-free RL locomotion/manipulation.
@ZhongyuLi4
Zhongyu Li
1 year
Exploiting morphological symmetries can enhance model-free RL sample efficiency, policy optimality, and sim2real transfer in legged locomotion and manipulation. Our #IROS2024 paper exploits these symmetries for RL methods. Code is open-sourced: 🌐 https://t.co/1I2TMl0sfW 🧵
1
3
6
@SpaceX
SpaceX
1 year
Mechazilla has caught the Super Heavy booster!
11K
62K
251K
@elonmusk
Elon Musk
1 year
Good morning
18K
32K
499K
@ZhongyuLi4
Zhongyu Li
1 year
Exploiting morphological symmetries can enhance model-free RL sample efficiency, policy optimality, and sim2real transfer in legged locomotion and manipulation. Our #IROS2024 paper exploits these symmetries for RL methods. Code is open-sourced: 🌐 https://t.co/1I2TMl0sfW 🧵
1
8
28
@ZhongyuLi4
Zhongyu Li
1 year
Introducing HiLMa-Res: a hierarchical RL framework for quadrupeds to tackle loco-manipulation tasks with sustained mobility! Designed for general learning tasks (vision-based, state-based, real-world data, etc.), the robot can now step over stones 🐾 / navigate boxes 📦 / dribble ⚽.
4
23
125