Haoyang Weng

@ElijahGalahad

Followers 740 · Following 646 · Media 33 · Statuses 193

Undergraduate @Tsinghua_IIIS | Intern @LeCARLab | Machine learning for robotics | Applying for PhD, Fall 2026

Joined December 2021
@ElijahGalahad
Haoyang Weng
2 months
We present HDMI, a simple and general framework for learning whole-body interaction skills directly from human videos: no manual reward engineering, no task-specific pipelines. 🤖 67 door traversals, 6 real-world tasks, 14 in simulation. 🔗 https://t.co/ll44sWTZF4
24
148
745
@ElijahGalahad
Haoyang Weng
2 days
Wow, zero gap visually?!
@xbpeng4
Jason Peng
2 days
MimicKit now supports #IsaacLab! After many years with IsaacGym, it's time to upgrade. MimicKit has a simple Engine API that allows you to easily swap between different simulator backends. Which simulator would you like to see next?
1
0
7
@waefrebeorn
WuBu ⪋ WaefreBeorn 🇺🇸 👑
2 days
@ElijahGalahad please consider my manifold optimizations. I have recreated your spiral experiment with a 1,000,000-point cloud and built models that use geodesic topology instead of traditional 2D AI strategies https://t.co/2RFOpmZeeW
1
1
1
@ElijahGalahad
Haoyang Weng
3 days
Loss type isn't the key variable; parameterization is. With the same prediction space, v-, x-, and ε-losses merely reduce to different t-weightings, so the conclusion carries over to all loss types. Check out https://t.co/ohTZAoFlhr, built on top of the amazing work by @ZhiSu22.
github.com
Unofficial implementation of the toy example in JiT https://arxiv.org/abs/2511.13720 - EGalahad/jit_toy_example
1
0
13
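A quick numerical check of the t-weighting claim, as a minimal sketch under an assumed variance-preserving schedule x_t = α_t·x_0 + σ_t·ε with α_t² + σ_t² = 1 (this is my illustration, not code from the linked repo): when the prediction is made in x-space, the ε- and v-losses are just the x-loss scaled by a t-dependent weight.

```python
import math
import torch

# Assumed VP schedule: x_t = alpha_t * x0 + sigma_t * eps, with alpha_t^2 + sigma_t^2 = 1.
t = 0.1 + 0.8 * torch.rand(4096, 1)                  # stay away from the endpoints
alpha, sigma = torch.cos(t * math.pi / 2), torch.sin(t * math.pi / 2)

x0 = torch.randn(4096, 2)                            # "data"
eps = torch.randn_like(x0)                           # noise
xt = alpha * x0 + sigma * eps
x0_hat = x0 + 0.1 * torch.randn_like(x0)             # any prediction made in x-space

# eps- and v-predictions implied by the x-prediction
eps_hat = (xt - alpha * x0_hat) / sigma
v = alpha * eps - sigma * x0
v_hat = alpha * eps_hat - sigma * x0_hat

x_err = ((x0_hat - x0) ** 2).sum(-1)
eps_err = ((eps_hat - eps) ** 2).sum(-1)
v_err = ((v_hat - v) ** 2).sum(-1)

# Same x-space error, different t-dependent weights.
print(torch.allclose(eps_err, (alpha / sigma).squeeze(-1) ** 2 * x_err, rtol=1e-4, atol=1e-6))  # True
print(torch.allclose(v_err, x_err / sigma.squeeze(-1) ** 2, rtol=1e-4, atol=1e-6))              # True
```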
@ElijahGalahad
Haoyang Weng
3 days
Residual parameterizations can change the effective prediction target. They determine whether the model must carry high-dimensional noise through the network, or whether it can operate purely on the low-dimensional data manifold.
3
0
8
@ElijahGalahad
Haoyang Weng
3 days
You cannot discuss optimization without considering architecture. Parameterization changes everything: the same objective can behave very differently. With a "clever" residual, ε-prediction can match x-prediction by reparameterizing the output head. https://t.co/lGJ8aqECS3
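A minimal sketch of what such a reparameterized head could look like, under the same assumed VP schedule as above (the Denoiser class, layer sizes, and time embedding here are my illustration, not the author's architecture): the backbone predicts in x-space, and the ε-prediction is recovered algebraically from that output and the noisy input, so the network never has to pass the high-dimensional noise through its layers.

```python
import math
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Backbone predicts in x0-space; the eps head is a reparameterization of that output."""
    def __init__(self, dim=2, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def x_pred(self, xt, t):
        return self.net(torch.cat([xt, t], dim=-1))

    def eps_pred(self, xt, t):
        # "Clever" residual head: algebraically an eps-prediction, but the backbone
        # only has to model the low-dimensional data manifold; the noise enters via xt.
        alpha, sigma = torch.cos(t * math.pi / 2), torch.sin(t * math.pi / 2)
        return (xt - alpha * self.x_pred(xt, t)) / sigma
```

Training eps_pred with an MSE loss against ε then minimizes the same x-space error as x-prediction, up to the (α_t/σ_t)² weight from the sketch above.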
@ElijahGalahad
Haoyang Weng
7 days
@YouJiacheng Yeah, but I think the point is that you want the network to operate in a low-dimensional space (the manifold) rather than the high-dimensional input space. Learning an identity means carrying the input all the way through the network: inefficient and redundant given the gap between the input and manifold dimensions.
1
1
11
@ElijahGalahad
Haoyang Weng
3 days
Many say ε-prediction and x-prediction are just reparameterizations and should behave the same. Actually… they do and they don't. In my extended toy experiment: • Vanilla MLP → x wins • Well-parameterized network → ε works fine as well
@ZhiSu22
Zhi Su
5 days
Wrote a repo to reproduce the results. Welcome to play with: https://t.co/hbGSz3IkAL
9
23
180
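For context, here is a hypothetical data generator in the spirit of the toy setup under discussion: a low-dimensional spiral embedded in a high-dimensional ambient space. The function name, dimensions, and embedding choice are placeholders, not the linked repos' actual construction.

```python
import numpy as np

def spiral_in_high_dim(n=100_000, ambient_dim=256, noise=0.0, seed=0):
    """Hypothetical toy data: a 1-D spiral (low-dimensional manifold)
    embedded in a high-dimensional ambient space via a random orthogonal map."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 4.0 * np.pi, size=n)
    spiral = np.stack([u * np.cos(u), u * np.sin(u)], axis=-1)      # (n, 2) spiral coordinates
    basis, _ = np.linalg.qr(rng.standard_normal((ambient_dim, 2)))  # orthonormal columns
    x = spiral @ basis.T                                            # (n, ambient_dim) embedded data
    return x + noise * rng.standard_normal(x.shape)
```

The gap between the 2-D manifold and the high-dimensional input is the point of the thread: x-prediction lets the network stay near the manifold, while naive ε-prediction forces it to reproduce the full-dimensional noise.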
@ElijahGalahad
Haoyang Weng
6 days
You're my spiritual leader.
@TairanHe99
Tairan He
6 days
Zero teleoperation. Zero real-world data. ➔ Autonomous humanoid loco-manipulation in reality. Introducing VIRAL: Visual Sim-to-Real at Scale. We achieved 54 autonomous cycles (walk, stand, place, pick, turn) using a simple recipe: 1. RL 2. Simulation 3. GPUs Website:
1
0
7
@ElijahGalahad
Haoyang Weng
7 days
Impressive long-horizon, whole-body, generalizable dexterity! Congrats @sundayrobotics. Curious about: 1. How costly is the map-building process? 2. Is the visual alignment done with a diffusion model, or with a retargeting -> rendering -> inpainting pipeline?
@tonyzzhao
Tony Zhao
7 days
Today, we present a step-change in robotic AI @sundayrobotics. Introducing ACT-1: A frontier robot foundation model trained on zero robot data. - Ultra long-horizon tasks - Zero-shot generalization - Advanced dexterity 🧵 ->
1
0
48
@BenQingwei
Elgce
8 days
Introducing Gallant: Voxel Grid-based Humanoid Locomotion and Local-navigation across 3D Constrained Terrains 🤖 Project page: https://t.co/eC1ftH5ozx Arxiv: https://t.co/5K9sXDNQWv Gallant is, to our knowledge, the first system to run a single policy that handles full-space
1
33
186
@ElijahGalahad
Haoyang Weng
7 days
admiring the simplicity and intuition 😍
2
3
72
@ElijahGalahad
Haoyang Weng
7 days
A lot of people using the HDMI codebase have reported that it trains really fast, e.g. the suitcase motion in under one hour. These techniques are minimal but essential for its efficiency. A figure in the paper will never be as intuitive as these videos. https://t.co/D3dzfIDswt
github.com
LeCAR-Lab/HDMI
0
0
0
@ElijahGalahad
Haoyang Weng
7 days
This is a #freelunch if you use teacher-student training: just train the teacher with residual actions and behavior-clone a student without them.
1
0
0
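A hypothetical sketch of that teacher-student step (function and argument names are placeholders, not the HDMI codebase API): the teacher explores with residual actions around the reference motion, while the student is behavior-cloned onto the same joint targets in a plain action space, so no reference motion is needed at deployment.

```python
import torch
import torch.nn.functional as F

def bc_loss(teacher, student, obs, motion_jpos, default_jpos):
    """Teacher acts in the residual space (around the reference motion);
    the student clones the resulting joint targets in a plain action space
    (around the default pose), so the reference motion is only needed at training time."""
    with torch.no_grad():
        jpos_target = motion_jpos + teacher(obs)       # teacher's intended PD target
    return F.mse_loss(default_jpos + student(obs), jpos_target)
```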
@ElijahGalahad
Haoyang Weng
7 days
As these clips show, using residual actions (left), the policy explores locally around the reference. Without them (right), an episode initialized from kneeling will abruptly pop up, generating low-quality training samples.
1
0
0
@ElijahGalahad
Haoyang Weng
7 days
#freelunch series 1: Residual Action Space for motion tracking. Use jpos_target = motion_jpos + action instead of default_jpos + action for exploration. This is especially useful for motions far from the default pose, e.g. kneeling.
1
1
6
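A minimal sketch of the two exploration spaces (the wrapper and its API are hypothetical; only the jpos_target = motion_jpos + action vs. default_jpos + action idea comes from the tweet):

```python
import numpy as np

class ResidualActionWrapper:
    """Map a tracking policy's action to PD joint targets."""

    def __init__(self, env, reference_motion, default_jpos, around_motion=True):
        self.env = env
        self.reference_motion = reference_motion        # (T, num_joints) reference joint positions
        self.default_jpos = np.asarray(default_jpos)    # (num_joints,) default pose
        self.around_motion = around_motion

    def step(self, action, frame_idx):
        # around_motion=True  -> jpos_target = motion_jpos + action (explore near the reference)
        # around_motion=False -> jpos_target = default_jpos + action (explore near the default pose)
        base = self.reference_motion[frame_idx] if self.around_motion else self.default_jpos
        return self.env.step(base + action)
```

For a kneeling reference, the default-pose variant forces the action to cover the entire kneel-to-default offset, which is exactly the exploration failure the clips above illustrate.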
@TairanHe99
Tairan He
9 days
I believe @physical_int chose these 3 demos on purpose to show everyone they are capable of all the iconic demos that other startups do: making coffee -> @sundayrobotics, folding laundry -> @DynaRobotics, building boxes -> @GeneralistAI. Now the burden's on the rest.
11
10
177
@AlanPaulFern1
Alan Fern
12 days
Imagine moving a heavy object with a joystick, through a swarm of quadruped-arm robots. 🕹️ decPLM: decentralized RL for multi-robot pinch-lift-move. • No comms or rigid links • Hierarchical RL + constellation reward • 2 → N robots, sim→real 🔗 https://t.co/BPwqHV0ngE
15
110
610
@ElijahGalahad
Haoyang Weng
12 days
Universal retargeting for dexterous hands and humanoids, grounded in physics!
@ChaoyiPan
Chaoyi Pan
12 days
๐Ÿ•ธ๏ธ Introducing SPIDER โ€” Scalable Physics-Informed Dexterous Retargeting! A dynamically feasible, cross-embodiment retargeting framework for BOTH humanoids ๐Ÿค– and dexterous hands โœ‹. From human motion โ†’ sim โ†’ real robots, at scale. ๐Ÿ”— Website: https://t.co/ieZfG2Q4L0 ๐Ÿงต 1/n
1
2
31
@ElijahGalahad
Haoyang Weng
15 days
'Progress in robotics often feels slow day to day, but zoom out, and it's staggering.'
@TairanHe99
Tairan He
15 days
Jan 2024: one humanoid stood up in a CMU lab. 20 months later: a day in the life of a humanoid at NVIDIA. Neither @zhengyiluo nor I could've imagined where this journey would lead, but what a ride it's been. Progress in robotics often feels slow day to day, but zoom out, and
1
0
8
@TairanHe99
Tairan He
15 days
Jan 2024: one humanoid stood up in a CMU lab. 20 months later: a day in the life of a humanoid at NVIDIA. Neither @zhengyiluo nor I could've imagined where this journey would lead, but what a ride it's been. Progress in robotics often feels slow day to day, but zoom out, and
@zhengyiluo
Zhengyi "Zen" Luo
15 days
How do you give a humanoid the general motion capability? Not just single motions, but all motion? Introducing SONIC, our new work on supersizing motion tracking for natural humanoid control. We argue that motion tracking is the scalable foundation task for humanoids. So we
2
14
121
@GuanyaShi
Guanya Shi
15 days
When @TairanHe99 accepted the PhD offer and I decided to work on humanoids in Aug 2023, I told him: learning-based humanoid whole-body control is one of the hardest control problems ("naive" sim2real just won't work), and you could spend your whole PhD on it. Yet @TairanHe99
@TairanHe99
Tairan He
15 days
Jan 2024: one humanoid stood up in a CMU lab. 20 months later: a day in the life of a humanoid at NVIDIA. Neither @zhengyiluo nor I could've imagined where this journey would lead, but what a ride it's been. Progress in robotics often feels slow day to day, but zoom out, and
1
8
103