Yinhuai
@NliGjvJbycSeD6t
664 Followers · 93 Following · 5 Media · 27 Statuses
PhD@HKUST|Humanoid, Manipulation, CV
Joined September 2019
Tired of designing task rewards? Our new work, PhysHOI, enables simulated humanoids to learn diverse basketball skills 🏀 purely from human demonstrations. Code available now! Site: https://t.co/LL2QIJV6VM Paper: https://t.co/kWFmsi0cSa Code/Data: https://t.co/B1bya43D10
3 replies · 43 reposts · 220 likes
🔥 #CVPR2025 Highlight Paper 🔥 Join us at Hall D #166! 📅 July 14, 5-7 PM 💬 Let's discuss Humanoid Skill Learning! 📜 Paper: “SkillMimic: Learning Basketball Interaction Skills from Demonstrations” 🏀 Project Page: https://t.co/23Ir3d8IVO 💻 Code/Data: https://t.co/nW1v2k9kNn
0 replies · 1 repost · 19 likes
Ever seen a humanoid robot serve beer without spilling a drop? Now you have. 🍻 Introducing Hold My Beer: learning gentle locomotion + stable end-effector control.
🤖Can a humanoid robot carry a full cup of beer without spilling while walking 🍺? Hold My Beer! Introducing Hold My Beer🍺: Learning Gentle Humanoid Locomotion and End-Effector Stabilization Control Project: https://t.co/jUMwEVEyAX See more details below👇
1 reply · 9 reposts · 63 likes
Who can stop this guy🤭? Highly robust basketball skills powered by our #SIGGRAPH2025 work. Project page: https://t.co/v38eGgwxvb There are more examples of enhancing general interaction and locomotion skills!
3 replies · 12 reposts · 103 likes
Trapped by data quality and volume challenges in imitation learning? Check our #SIGGRAPH2025 paper: 🤖SkillMimic-V2: Learning Robust and Generalizable Interaction Skills from Sparse and Noisy Demonstrations. 🌐 Page: https://t.co/tK6JhgD5MK 🧑🏻💻 Code: https://t.co/fhnKZwwcLD
0 replies · 15 reposts · 37 likes
Excited to share our latest work! 🤩 Masked Mimic 🥷: Unified Physics-Based Character Control Through Masked Motion Inpainting Project page: https://t.co/CbnEEs4NAv with: Yunrong (Kelly) Guo, @ofirnabati, @GalChechik and @xbpeng4. @SIGGRAPHAsia (ACM TOG). 1/
13 replies · 97 reposts · 413 likes
WoCoCo is accepted by #CoRL2024. Code will be released in a few weeks. (Complaint: 2 of 3 reviewers raised 90% of the questions and did not join the rebuttal. It felt like an ICRA submission with a long-delayed notification. We have revised our manuscript accordingly, though.)
🚨 Without Any Motion Priors, how to make humanoids do versatile parkour jumping🦘, clapping dance🤸, cliff traversal🧗, and box pick-and-move📦 with a unified RL framework? Introduce WoCoCo: 🧗 Whole-body humanoid Control with sequential Contacts 🎯Unified designs for minimal
2 replies · 10 reposts · 110 likes
#CVPR2024 #HandObjectInteraction Excited to share our dataset TACO - the first large-scale real-world 4D bimanual tool usage dataset covering diverse Tool🥄-ACtion🙌-Object🍵 compositions and object geometries. Join us at Poster 213, Arch 4A-E, Fri 10:30am - noon PDT🍺.
3 replies · 51 reposts · 258 likes
From learning individual skills to composing them into a basketball-playing agent via hierarchical RL -- introducing SkillMimic, Learning Reusable Basketball Skills from Demonstrations 🌐: https://t.co/E2dSADhjE9 📜: https://t.co/DQXEljVOmO 🧑🏻💻: https://t.co/XKqAJOizb1 Work led
Simulated humanoids now learn how to handle a basketball🏀🏀🏀! New work, PhysHOI, led by @NliGjvJbycSeD6t, learns dynamic human-object interactions (basketball, grabbing, etc.). Site🌐: https://t.co/pACKSELCc9 Paper📄: https://t.co/uutNg22QSE Code/Data🧑🏻💻: https://t.co/u028DRLhJh (coming
7 replies · 79 reposts · 391 likes
🤖Introducing 📺𝗢𝗽𝗲𝗻-𝗧𝗲𝗹𝗲𝗩𝗶𝘀𝗶𝗼𝗻: a web-based teleoperation software! 🌐Open source, cross-platform (VisionPro & Quest) with real-time stereo vision feedback. 🕹️Easy-to-use hand, wrist, head pose streaming. Code: https://t.co/3lu5ZTMNfA
13 replies · 92 reposts · 374 likes
It took my brain a while to parse what's going on in this video. We are so obsessed with "human-level" robotics that we forget it is just an artificial ceiling. Why don't we make a new species superhuman from day one? Boston Dynamics has once again reinvented itself. Gradually,
158 replies · 381 reposts · 3K likes
🤔 Ever wondered whether simulation-based animation/avatar learning can be applied to real humanoids in real time? 🤖 Introducing H2O (Human2HumanOid): - 🧠 An RL-based human-to-humanoid real-time whole-body teleoperation framework - 💃 Scalable retargeting and training using large
4 replies · 67 reposts · 278 likes
This is how ALOHA's "teleoperation" system works - a fancy word for "remote control". Training robots will be more and more like playing games in the physical world. A human operates a "joystick++" to perform tasks and collect data, or intervene if there's any safety concern.
48 replies · 106 reposts · 622 likes
Figure-01 has learned to make coffee ☕️ Our AI learned this after watching humans make coffee This is end-to-end AI: our neural networks are taking video in, trajectories out Join us to train our robot fleet: https://t.co/egQy3iz3Ky
https://t.co/Y0ksEoHZsW
556 replies · 974 reposts · 5K likes
You can now ask your simulated humanoid to perform actions, in REAL-TIME 👇🏻 Powered by the amazing EMDM (@frankzydou, @Alex_wangjingbo, et al.) and PHC. EMDM: https://t.co/USGcRmhssX PHC: https://t.co/wOxdY1i24f Simulation: Isaac Gym
4 replies · 56 reposts · 259 likes
Interested in image restoration? Check our #ICLR2023 spotlight paper "Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model" for a brand new perspective! 👻 https://t.co/qqSRGkSJJY
https://t.co/ScAOz93jtB
1 reply · 1 repost · 3 likes
DDNM is a zero-shot image restoration model. It decomposes the data space into the range and null space of the linear degradation operator, then refines the image only in the null space, which guarantees data consistency while improving realism.
Most existing Image Restoration (IR) models are task-specific, which can not be generalized to different degradation operators. In this work, we propose the Denoising Diffusion Null-Space Model...
0 replies · 2 reposts · 6 likes
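The range/null-space idea behind DDNM can be sketched in a few lines of NumPy. This is a toy illustration only: a small random matrix stands in for the linear degradation operator, and all variable names are hypothetical, not from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear degradation y = A @ x (e.g. downsampling/masking in the paper)
m, n = 4, 8
A = rng.standard_normal((m, n))     # full row rank almost surely
A_pinv = np.linalg.pinv(A)          # Moore-Penrose pseudo-inverse

x_true = rng.standard_normal(n)     # ground-truth signal
y = A @ x_true                      # degraded observation

x0 = rng.standard_normal(n)         # any candidate restoration (e.g. a diffusion sample)

# DDNM-style decomposition: the range-space part is pinned to y,
# while the null-space part is taken freely from the candidate x0.
x_hat = A_pinv @ y + (np.eye(n) - A_pinv @ A) @ x0

# Data consistency holds exactly: degrading the result reproduces y.
assert np.allclose(A @ x_hat, y)
```

Because `A @ A_pinv` is the identity when `A` has full row rank, the range-space term alone already satisfies the measurement, so the null-space term can be replaced by any (e.g. diffusion-refined) estimate without breaking consistency.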