Chuan Guo
@chuan_guo92603
Followers: 227 · Following: 120 · Media: 6 · Statuses: 71
Research Scientist @Meta Reality Lab. Previously RS @Snap. 3D Vision, Motion Generation, Animation.
Redmond, WA
Joined July 2023
Please check out #Text2Interact. We study diverse, realistic Text-to-Interaction Generation via (1) InterCompose, which synthesizes high-quality LLM data to mitigate data scarcity, and (2) InterActor, an adaptive interaction model (conditioning and loss design) that boosts…
arxiv.org
Modeling human-human interactions from text remains challenging because it requires not only realistic individual dynamics but also precise, text-consistent spatiotemporal coupling between agents....
Introducing Text2Interact, our new pipeline capable of generating high-fidelity, diverse two-person interactions from text. (1/4) #AI #GenerativeAI #HumanMotion #Text2Interact
(1/4) [HOIDiNi] https://t.co/WPPzG4hrsd Diffusion models are great at generating free-form human motion but tend to break down when objects enter the scene. Human-object interaction demands millimetric precision, and even tiny errors cause hands to float or penetrate surfaces…
Check out DIMO (https://t.co/SLxNbnim3D, Highlight) at Booth #410 at #ICCV2025 (Wed, morning session). From a single image, we distill video-model priors into a motion latent space to sample diverse 3D motions (neural keypoint trajectories + 3DGS).
Glad to introduce our #ICCV2025 work Ponimator: Unfolding Interactive Pose for Versatile Human-human Interaction Animation. Website: https://t.co/SsHdMUFVAG | Paper: https://t.co/FkHzLQl152 | Code: https://t.co/uAtxC2JwCa | Poster: Wed 10/22, 10:30am, #1109
@chuan_guo92603 @_JianWang_
Please feel free to drop by.
Speakers & organizers include experts from Meta, Nvidia, ETH, MPI, Northeastern University, University of Pennsylvania, MIT, and more. In particular, we are honored to have Libin Liu, @zhengyiluo, @XianghuiXie, @KorraweK as our speakers.
This tutorial covers the foundations and latest advances in topics including: 1) human motion generation basics, 2) kinematic- and physics-based motion models, 3) controllability of motion generation, 4) human-object/scene interactions, and 5) co-speech gesture synthesis.
We'll be hosting a Tutorial on "3D Human Motion Generation and Simulation" at ICCV 2025 in Honolulu, Hawaii!
Date: October 19, 2025
Time: 9:00–16:00 (HST)
More details & resources: https://t.co/S1Unz1oRdr
#AIGC #Simulation #robotics #ComputerVision #ICCV2025
You don't need a clean dataset to train a motion cleanup model. #StableMotion learns to fix corrupted motions directly from raw mocap data: no handcrafted data pairs, no synthetic artifact augmentation. Excited to share our latest work on motion cleanup, accepted to…
SnapMoGen has been released! All mocap files are ready to download. Feel free to give it a try!
1) From Sketch to Animation! Ever wished your hand-drawn storyboards could come to life? Meet Sketch2Anim, our framework that transforms sketches into expressive 3D animations. Presenting at #SIGGRAPH2025. Project: https://t.co/QDvq7IRg13
Excited to share our latest work on spatial audio-driven human motion generation. We aim to tackle a largely underexplored yet important problem: enabling virtual humans to move naturally in response to spatial audio, capturing not just what is heard, but also where the sound…
3. Webpage: https://t.co/nb6AD6JR1t | Paper: https://t.co/M2SdAV7HHQ | Code: https://t.co/CENPBMaTii | Dataset: https://t.co/EFqDTcn8om Code & data release expected within a week!
arxiv.org
Text-to-motion generation has experienced remarkable progress in recent years. However, current approaches remain limited to synthesizing motion from short or general text prompts, primarily due...
2. Built on these expressive text annotations, our model achieves finer control, broader prompt coverage (via LLMs), and more realistic, coherent motion. This project is led by the Snap Research team in NYC: Bing Zhou, Jian (James) Wang, Inwoo Hwang.
1. It features: 20K high-quality mocap clips (44 hours) covering various kinds of actions, and 122K richly detailed descriptions (avg. 48 words vs. 12 in HumanML3D).
Want better motion generation? Start with better text. #SnapMoGen: a novel text-to-motion dataset built for expressive control and generalization. #AIGC #3DAnimation #AI #ComputerVision #AIResearch #3DMotion #MotionGeneration #TextToMotion
1/ Can we teach a motion model to "dance like a chicken"? Or better: can LoRA help motion diffusion models learn expressive, editable styles without forgetting how to move? Led by @HSawdayee, @chuan_guo92603, we explore this in our latest work. https://t.co/VeLnlNkHYO
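For context on the LoRA question above, here is a minimal, self-contained PyTorch sketch of the general LoRA recipe (freeze the pretrained weights, learn a low-rank residual), not the authors' actual implementation; the `to_q`/`to_k`/`to_v` projection names and the `add_lora` helper are hypothetical placeholders for the attention layers of a motion diffusion denoiser.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank residual: y = Wx + (alpha/r) * B(A(x))."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False           # keep the pretrained model intact
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.normal_(self.lora_a.weight, std=0.01)
        nn.init.zeros_(self.lora_b.weight)    # residual starts at zero, so behavior is unchanged at init
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

def add_lora(module: nn.Module, rank: int = 8) -> None:
    """Recursively wrap attention projections (hypothetical names) with LoRA adapters."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear) and name in ("to_q", "to_k", "to_v"):
            setattr(module, name, LoRALinear(child, rank=rank))
        else:
            add_lora(child, rank)
```

Only the `lora_a`/`lora_b` weights would be trained on the style data, so an adapter can be swapped out or scaled down at inference time while the base motion model keeps its original capabilities.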
What a fantastic workshop! Huge thanks to our amazing speakers, insightful poster presenters, dedicated reviewers, and the engaged audience who stayed with us throughout the day. We couldn't have asked for a better community! See you next time! #CVPR2025 #HumanMotion