Huaijin Pi
@HuaijinPi
Followers: 61 · Following: 11 · Media: 7 · Statuses: 13
Ph.D. student at the University of Hong Kong
Joined March 2022
🚀 Excited to share our NeurIPS 2025 paper, CoDA: Coordinated Diffusion Noise Optimization for Whole-Body Manipulation of Articulated Objects
🔗 Project page: https://t.co/UuY2onPxjH
🔗 Code: https://t.co/8ozQUSUqiF
🔗 Paper: https://t.co/ga6gT0RL4h
5 replies · 15 reposts · 84 likes
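To unpack the title for non-experts: "diffusion noise optimization" generally means treating the initial noise fed to a frozen diffusion model as the optimization variable, and backpropagating a task loss through the sampler. A minimal PyTorch sketch of that general idea, with hypothetical stand-in callables (this is not the released CoDA code):

```python
import torch

def optimize_noise(denoise, task_loss, shape, steps=100, lr=1e-2):
    """Generic diffusion noise optimization, sketched under assumptions.

    denoise:   frozen, differentiable sampler mapping noise -> motion
    task_loss: scores a generated motion (e.g., hand-object contact error)
    Both callables are hypothetical stand-ins, not CoDA's actual API.
    """
    z = torch.randn(shape, requires_grad=True)   # the noise is the variable
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        motion = denoise(z)        # differentiable sampling pass
        loss = task_loss(motion)
        opt.zero_grad()
        loss.backward()            # gradients flow back into the noise
        opt.step()
    return denoise(z).detach()
```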
Worked hard on this project!
✨ We are excited to open-source Tencent HY-Motion 1.0, a billion-parameter text-to-motion model built on the Diffusion Transformer (DiT) architecture and flow matching. Tencent HY-Motion 1.0 empowers developers and individual creators alike by transforming natural language into…
0 replies · 1 repost · 1 like
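For context on the objective named in the quoted post: flow matching trains a velocity field that transports noise samples to data, and a DiT is one choice of backbone for that field. A generic, self-contained sketch of the conditional flow-matching loss with straight-line paths (illustrative only, not Tencent's training code):

```python
import torch

def flow_matching_loss(model, x1, cond):
    """Conditional flow matching with straight-line (rectified) paths.

    model: any velocity network v(x_t, t, cond), e.g. a DiT; `cond` would
           be a text embedding. All names here are assumptions, not the
           HY-Motion 1.0 interface.
    x1:    batch of data samples, shape (B, T, D) for motion sequences.
    """
    x0 = torch.randn_like(x1)                 # noise endpoint of the path
    t = torch.rand(x1.shape[0], 1, 1, device=x1.device)
    x_t = (1 - t) * x0 + t * x1               # point on the linear path
    target_v = x1 - x0                        # constant target velocity
    pred_v = model(x_t, t.flatten(), cond)
    return ((pred_v - target_v) ** 2).mean()
```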
Please check out our paper #MOSPA, "🎧 Human Motion Generation Driven by Spatial Audio," at #NeurIPS2025 (🌟 Spotlight)! 😊 We have released our dataset and models : ) 💡 The paper tackles the challenge of spatial-audio-driven human motion generation, enabling virtual humans to respond…
Excited to share our latest work on 🎧 spatial audio-driven human motion generation. We aim to tackle a largely underexplored yet important problem of enabling virtual humans to move naturally in response to spatial audio, capturing not just what is heard, but also where the sound…
0 replies · 10 reposts · 20 likes
Come meet us at our poster in San Diego! 🎉
📍 Exhibit Hall C, D, E, Poster #5207
🕚 Wed, Dec 3 | 11 a.m.–2 p.m. PST
Huge thanks to my amazing collaborators: Zhi Cen, @frankzydou, and Taku Komura.
0 replies · 0 reposts · 1 like
CoDA is not limited to articulated objects — it also supports rigid object manipulation, producing stable, coordinated whole-body motions driven purely by text.
1 reply · 0 reposts · 0 likes
CoDA also produces highly diverse whole-body manipulation motions from the same text prompt.
1 reply · 0 reposts · 0 likes
CoDA’s generated trajectories can be directly deployed to simulated humanoids, enabling interactive embodied control in physics-based environments.
1 reply · 0 reposts · 0 likes
Our coordinated optimization allows the character to walk and manipulate objects simultaneously, maintaining stability and precision across the entire movement.
1 reply · 0 reposts · 0 likes
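One plausible reading of "coordinated" here (an illustrative guess, not the paper's implementation): separate noise variables, e.g. for the body and the hands, are updated under one shared objective, so a hand-contact gradient also pulls the body into reach. Extending the earlier sketch:

```python
import torch

def coordinated_optimize(body_denoise, hand_denoise, joint_loss,
                         body_shape, hand_shape, steps=200, lr=1e-2):
    # Hypothetical stand-ins again: two frozen samplers, one shared loss.
    z_body = torch.randn(body_shape, requires_grad=True)
    z_hand = torch.randn(hand_shape, requires_grad=True)
    opt = torch.optim.Adam([z_body, z_hand], lr=lr)
    for _ in range(steps):
        body, hand = body_denoise(z_body), hand_denoise(z_hand)
        loss = joint_loss(body, hand)   # e.g., contact + locomotion terms
        opt.zero_grad()
        loss.backward()                 # one objective updates both noises
        opt.step()
    return body_denoise(z_body).detach(), hand_denoise(z_hand).detach()
```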
CoDA can take hand-only manipulation datasets and automatically generate corresponding whole-body motions.
1 reply · 1 repost · 1 like
Our method supports object keyframe pose–driven generation, where users specify only sparse object poses and CoDA produces full-body, physically plausible motions that manipulate the articulated object accordingly.
1 reply · 0 reposts · 0 likes
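One plausible way such keyframe control could plug into noise optimization (purely illustrative; the released code may differ): add a penalty that pins the generated object trajectory to the user-given poses at their frames, and include it in the optimized loss.

```python
import torch

def keyframe_loss(obj_traj, keyframes):
    """Penalty pinning sparse frames of an object trajectory to targets.

    obj_traj:  (T, D) generated object pose per frame
    keyframes: {frame_index: (D,) target pose} -- an assumed interface,
               not CoDA's actual one.
    """
    loss = obj_traj.new_zeros(())
    for frame, target in keyframes.items():
        loss = loss + ((obj_traj[frame] - target) ** 2).sum()
    return loss
```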