LeCAR Lab at CMU
@LeCARLab
502 Followers · 57 Following · 0 Media · 34 Statuses
Learning and Control for Agile Robotics Lab at @CarnegieMellon @SCSatCMU @CMU_Robotics.
Pittsburgh
Joined August 2023
🕸️ Introducing SPIDER — Scalable Physics-Informed Dexterous Retargeting! A dynamically feasible, cross-embodiment retargeting framework for BOTH humanoids 🤖 and dexterous hands ✋. From human motion → sim → real robots, at scale. 🔗 Website: https://t.co/ieZfG2Q4L0 🧵 1/n
Introducing SPIDER: a unified, scalable dynamics-level retargeting method for both dexterous hands and humanoids! What does "dynamics-level retargeting" mean? Human motions in, physically feasible, high-quality robot motions out (BOTH state and action). These robot motions are so good
Meet BFM-Zero: A Promptable Humanoid Behavioral Foundation Model w/ Unsupervised RL👉 https://t.co/3VdyRWgOqb 🧩ONE latent space for ALL tasks ⚡Zero-shot goal reaching, tracking, and reward optimization (any reward at test time), from ONE policy 🤖Natural recovery & transition
Excited to release BFM-Zero, an unsupervised RL approach to learning a humanoid Behavior Foundation Model. Existing general whole-body humanoid controllers rely on explicit motion-tracking rewards, on-policy policy-gradient methods like PPO, and distillation into one policy. In contrast, BFM-Zero
✈️🤖 What if an embodiment-agnostic visuomotor policy could adapt to diverse robot embodiments at inference with no fine-tuning? Introducing UMI-on-Air, a framework that brings embodiment-aware guidance to diffusion policies for precise, contact-rich aerial manipulation.
100% human data → long-horizon 💡 (lightbulb) insertion on an aerial manipulator. Key: the Embodiment-Aware Diffusion Policy (EADP) steers UMI's embodiment-agnostic diffusion policy using the gradient of the low-level controller's tracking error. My favorite part: we quantify how "UMI-able" different robots are.
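The gradient-guidance idea above can be sketched in a few lines. This is a generic illustration under assumed names (`tracking_cost_grad`, the quadratic cost surrogate, and the scalar denoising update are all hypothetical stand-ins), not the UMI-on-Air implementation: at each denoising step, the sampled action is nudged down the gradient of a low-level tracking-error cost, biasing the output toward motions this embodiment's controller can actually track.

```python
import numpy as np

def tracking_cost_grad(action, embodiment_stiffness):
    # Hypothetical quadratic surrogate for the low-level controller's
    # tracking error; its gradient points away from hard-to-track actions.
    return 2.0 * embodiment_stiffness * action

def guided_denoise(noisy_action, n_steps=10, guidance_scale=0.05,
                   embodiment_stiffness=1.0):
    """Embodiment-aware guidance sketch: interleave (a stand-in for)
    the diffusion denoiser with tracking-cost gradient steps."""
    a = np.asarray(noisy_action, dtype=float).copy()
    for _ in range(n_steps):
        a = 0.9 * a  # stand-in for the diffusion model's denoising update
        a -= guidance_scale * tracking_cost_grad(a, embodiment_stiffness)
    return a

raw = np.array([1.0, -2.0, 0.5])
guided = guided_denoise(raw)
```

A stiffer (harder-to-track) embodiment would get a larger `embodiment_stiffness`, pulling the sampled actions more aggressively toward conservative motions.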
Didn't get a chance to attend #CoRL2025 in person, but students from @LeCARLab will present three papers: - Sampling-based system ID for legged sim2real https://t.co/sTYRqgctBn at Oral 5 by @NikhilSoban353 - HoldMyBeer: learning humanoid end-effector stabilization control
We present HDMI, a simple and general framework for learning whole-body interaction skills directly from human videos — no manual reward engineering, no task-specific pipelines. 🤖 67 door traversals, 6 real-world tasks, 14 in simulation. 🔗 https://t.co/ll44sWTZF4
On my way ✈️ to ATL for @ieee_ras_icra! @LeCARLab will present 8 conference papers (including DIAL-MPC as the Best Paper Finalist) and one RA-L paper. Details: https://t.co/GKJwGkGkjD Hope to meet old & new friends and chat about building generalist 🤖 with agility 🚀
🦾How can humanoids unlock real strength for heavy-duty loco-manipulation? Meet FALCON🦅: Learning Force-Adaptive Humanoid Loco-Manipulation. 🌐: https://t.co/vMmJneIlDk See the details below👇:
I've been working on dynamics model learning in robotics for >6 years (since the Neural-Lander paper). Training a small DNN + some regularization + some online weight adaptation has proven effective for specific robots on specific tasks. LLMs have shown the power of
🚨🚨Can generalist robotics models perform agile tasks? Introducing 𝗔𝗻𝘆𝗖𝗮𝗿🏎️ 🚚 🚗, a transformer-based 𝗴𝗲𝗻𝗲𝗿𝗮𝗹𝗶𝘀𝘁 vehicle dynamics model that can adapt to various cars, tasks, and envs via in-context adaptation, 𝗼𝘂𝘁𝗽𝗲𝗿𝗳𝗼𝗿𝗺𝗶𝗻𝗴 𝘄𝗲𝗹𝗹-𝘁𝘂𝗻𝗲𝗱
🎙️I gave a talk "Building Generalist Robots with Agility via Learning and Control: Humanoids and Beyond" at the CMU RI Seminar and Michigan AI Symposium. It covers @LeCARLab's research and my thoughts on building agile generalist robots. Recording:
H2O (👉 https://t.co/JvgjKGiqSi) and OmniH2O (👉 https://t.co/XRxdXIUDUX) are open-sourced! Check out our fully open-source code: https://t.co/w15CM2rsXr, featuring simulation training, motion data retargeting, and real-world deployment. Have fun with your humanoids!
[IROS 2024] Learning Human-to-Humanoid Real-Time Whole-Body Teleoperation. [CoRL 2024] OmniH2O: Universal and Dexterous Human-to-Humanoid Whole-Body Teleoperation and Learning - ...
🤖 Introducing H2O (Human2HumanOid): - 🧠 An RL-based human-to-humanoid real-time whole-body teleoperation framework - 💃 Scalable retargeting and training using large human motion dataset - 🎥 With just an RGB camera, everyone can teleoperate a full-sized humanoid to perform
🎉 Diffusion-style annealing + sampling-based MPC can surpass RL, and seamlessly adapt to task parameters, all 𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴-𝗳𝗿𝗲𝗲! We open sourced DIAL-MPC, the first training-free method for whole-body torque control using full-order dynamics 🧵 https://t.co/wIaaT5CTEH
Thanks @_akhaliq! Finally, a robot can do continuous, agile, autonomous, adaptive jumping over stairs and stepping stones. Key idea: combine the pros of model-free RL and model-based control. RL (for CoM refs) + QP (for GRFs) + WBC (for torques). Open-sourced: https://t.co/qgKsdVMcuC
Agile Continuous Jumping in Discontinuous Terrains discuss: https://t.co/2beMBjQUTQ We focus on agile, continuous, and terrain-adaptive jumping of quadrupedal robots in discontinuous terrains such as stairs and stepping stones. Unlike single-step jumping, continuous jumping
The CMU LeCAR Lab will present several papers at #RSS2024 this week, covering topics from learning, control, and optimization to safe autonomy and humanoids. Check them out!
This is the most artistic project I've ever worked on 🎨 Our aerial manipulator achieves precise contact-force and motion planning and tracking, which enables the calligraphy demos in the video. Beyond calligraphy, it can be applied to many high-altitude tasks such as
We introduce 𝐅𝐥𝐲𝐢𝐧𝐠 𝐂𝐚𝐥𝐥𝐢𝐠𝐫𝐚𝐩𝐡𝐞𝐫, an aerial manipulation system that can draw various calligraphy artworks: 🎯Contact-aware trajectory planning and hybrid control ✏️Intuitive user interface and novel end-effector design 🧑🎨UAM can draw letters with changing
🚨 Without any motion priors, how can humanoids do versatile parkour jumping 🦘, clapping dance 🤸, cliff traversal 🧗, and box pick-and-move 📦 with a unified RL framework? Introducing WoCoCo: 🧗 Whole-body humanoid Control with sequential Contacts 🎯 Unified designs for minimal
Introducing OmniH2O, a learning-based system for whole-body humanoid teleoperation and autonomy: 🦾 Robust loco-manipulation policy 🦸 Universal teleop interfaces: VR, verbal, RGB 🧠 Autonomy via @chatgpt4o or imitation 🔗 Releasing the first whole-body humanoid dataset https://t.co/XRxdXIVbKv