Yiming Xie Profile
Yiming Xie

@YimingXie4

Followers: 681
Following: 2K
Media: 9
Statuses: 63

CS PhD student @khourycollege | B.E. @ZJU_China

Joined March 2020
@LingjieLiu1
Lingjie Liu
16 days
🚀 Happening now in Room 320 at #ICCV2025! Join our full-day tutorial on 3D Human Motion Generation & Simulation 🔗 https://t.co/92grSRF8f3
@chuan_guo92603
Chuan Guo
1 month
🚀 We’ll be hosting a Tutorial on "3D Human Motion Generation and Simulation" at ICCV 2025 in Honolulu, Hawaii! 🌺 📅 Date: October 19, 2025 ⏰ Time: 9:00–16:00 (HST) 🔗 More details & resources: https://t.co/S1Unz1oRdr #AIGC #Simulation #robotics #ComputerVision #ICCV2025
6
9
36
@frankzydou
Zhiyang (Frank) Dou
16 days
Happening now at #ICCV2025 in Hawaii! ✨ Join our tutorial on 3D Human Motion Generation & Simulation! 📆 Today, Oct 19, 9am–5pm 🔗
@frankzydou
Zhiyang (Frank) Dou
1 month
🚀 We’ll be hosting a Tutorial on "3D Human Motion Generation and Simulation" at ICCV 2025 in Honolulu, Hawaii! 🌺 🏃‍♀️🏃‍♂️🧗🏊🚴🕺🤖 📅 Date: October 19, 2025 ⏰ Time: 9:00–16:00 (HST) This tutorial brings together leading researchers to cover the foundations and latest advances…
0
1
6
@frankzydou
Zhiyang (Frank) Dou
18 days
🚀If you’ll be at ICCV 2025, please join the “3D Human Motion Generation and Simulation” tutorial on Sunday, October 19, 2025, 9:00–17:00 (HST), in Room 320. #ICCV #ICCV2025 #Humanmotion #Motion #Animation #Simulation
@zhengyiluo
Zhengyi “Zen” Luo
18 days
Is Motion Tracking All You Need for Humanoid Control? Come to my tutorial about physics-based humanoid control in simulation and the real world! I will share our latest and greatest results. 🏖️🏖️🏖️
1
10
102
@LeiZhong_
Lei Zhong
3 months
1) 🚀 From Sketch to Animation! Ever wished your hand-drawn storyboards could come to life? 🎨 Meet Sketch2Anim — our framework that transforms sketches into expressive 3D animations. Presenting at #SIGGRAPH2025 🇨🇦🎉 🔗 Project: https://t.co/QDvq7IRg13
1
6
17
@Fangrui_Zhu
Fangrui Zhu
5 months
🌟LMMs, e.g. GPT-o3, can solve spatial tasks from RGBD videos—with strong perception and prompting. 🚀We introduce Struct2D, a method that boosts spatial reasoning in open-source models. Even Qwen-VL-3B + Struct2D outperforms existing 7B models. 📜arXiv: https://t.co/lomJaaF83C
1
5
17
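As a rough, assumption-laden illustration of the idea the tweet gestures at (turning 3D structure from RGB-D input into a 2D prompt an LMM can reason over), here is a minimal numpy sketch; the bird's-eye-view rasterization, the scene extent, and the centroid "marks" are guesses at the general recipe, not the Struct2D implementation:

import numpy as np

def birds_eye_view(points_xyz, labels, grid=256, extent=10.0):
    # Rasterize an RGB-D-derived point cloud (N, 3) into a top-down
    # occupancy image, plus per-object centroid marks that a 2D prompt
    # could overlay for the LMM to reason over.
    bev = np.zeros((grid, grid), dtype=np.uint8)
    ij = ((points_xyz[:, [0, 2]] / extent + 0.5) * grid).astype(int)  # x,z -> pixels
    ok = ((ij >= 0) & (ij < grid)).all(axis=1)                        # clip to the map
    bev[ij[ok, 1], ij[ok, 0]] = 255
    marks = {int(lab): points_xyz[labels == lab].mean(axis=0)         # object centroids
             for lab in np.unique(labels)}
    return bev, marks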
@HuaizuJiang
Huaizu Jiang
5 months
We revisit the representation used in human motion generation, showing that absolute joint coordinates outperform the de facto choice of kinematic-aware, local-relative, redundant features. Benefits include: ✅ Easy motion control/editing ✅ Direct generation of SMPL mesh vertices in motion
1
1
12
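The contrast the tweet draws is between the common local-relative representation, where each joint stores a rotation in its parent's frame and positions must be recovered by forward kinematics, and predicting absolute joint coordinates directly. A minimal sketch of that recovery step, assuming a standard kinematic tree (this is generic FK, not the paper's code):

import numpy as np

def forward_kinematics(local_rot, offsets, parents, root_pos):
    # local_rot: (J, 3, 3) rotation of each joint w.r.t. its parent.
    # offsets:   (J, 3) bone offset of each joint in its parent's frame.
    # parents:   length-J list of parent indices; parents[0] == -1 (root).
    # root_pos:  (3,) global root translation.
    J = len(parents)
    global_rot = np.zeros((J, 3, 3))
    global_pos = np.zeros((J, 3))
    global_rot[0], global_pos[0] = local_rot[0], root_pos
    for j in range(1, J):
        p = parents[j]
        global_rot[j] = global_rot[p] @ local_rot[j]
        global_pos[j] = global_pos[p] + global_rot[p] @ offsets[j]
    return global_pos  # the "absolute joint coordinates" of the tweet

Models trained on absolute coordinates skip this chain entirely, which is why control, editing, and direct mesh-vertex generation become simpler.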
@StabilityAI
Stability AI
6 months
We’ve upgraded Stable Video Diffusion 4D to Stable Video 4D 2.0 (SV4D 2.0), improving the quality of 4D outputs generated from a single object-centric video. While 3D provides a static view of an object’s shape and size, 4D extends this by including time, showing how the object…
8
59
273
@YimingXie4
Yiming Xie
6 months
🎉Come check out our poster at #ICLR2025! SV4D: Dynamic 3D Content Generation with Multi-Frame and Multi-View Consistency 🗓️ Thursday, April 24 ⏰ 3:00 PM – 5:30 PM 📍 Hall 3 + Hall 2B, Poster #112 🧑‍💻 Presented by @chunhanyao @HuaizuJiang 🔗
@StabilityAI
Stability AI
1 year
We are pleased to announce the availability of Stable Video 4D, our very first video-to-video generation model that allows users to upload a single video and receive dynamic novel-view videos of eight new angles, delivering a new level of versatility and creativity. In…
0
1
20
@StabilityAI
Stability AI
8 months
Introducing Stable Virtual Camera: This multi-view diffusion model transforms 2D images into immersive 3D videos with realistic depth and perspective—without complex reconstruction or scene-specific optimization.
50
408
2K
@Hongyu_Lii
Hongyu Li
8 months
Can we robustly track an object’s 6D pose in contact-rich, occluded scenarios? Yes! Our solution, V-HOP, fuses vision and touch through a visuo-haptic transformer for precise, real-time tracking. arXiv: https://t.co/gz3yo4a7Ce Project: https://t.co/nvajek3CL6
6
28
167
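A minimal PyTorch sketch of what "fusing vision and touch through a transformer" could look like; the module name, token encoders, dimensions, and pose parameterization below are illustrative assumptions, not V-HOP's actual architecture:

import torch
import torch.nn as nn

class VisuoHapticFusion(nn.Module):
    # Shared transformer over visual and tactile tokens; a head regresses
    # the object's 6D pose (translation + unit quaternion).
    def __init__(self, dim=256, heads=8, layers=4):
        super().__init__()
        block = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, layers)
        self.modality = nn.Embedding(2, dim)   # 0 = vision, 1 = touch
        self.pose_head = nn.Linear(dim, 7)     # (tx, ty, tz, qw, qx, qy, qz)

    def forward(self, vis_tokens, touch_tokens):
        # vis_tokens: (B, Nv, dim) from an image encoder; touch_tokens:
        # (B, Nt, dim) from a tactile encoder -- both assumed given.
        vis = vis_tokens + self.modality.weight[0]
        tou = touch_tokens + self.modality.weight[1]
        fused = self.encoder(torch.cat([vis, tou], dim=1))
        return self.pose_head(fused.mean(dim=1))  # pooled pose estimate

The point of the fusion is that tactile tokens stay informative precisely when occlusion degrades the visual ones, which is the contact-rich regime the tweet targets.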
@DaiWenxun
wxDai
11 months
🔥Today, we announce the MotionLCM-V2, a state-of-the-art text-to-motion model in motion generation quality, motion-text alignment capability, and inference speed. ✍️Blogpost: https://t.co/NQ38yiYpxD 💻Code: https://t.co/RcyxyAnThD
1
8
13
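The inference-speed claim is what (latent) consistency models buy: the network maps a noisy latent straight to a clean one, so a motion can be sampled in one or a few steps instead of hundreds of diffusion steps. A generic multistep consistency-sampling sketch, not MotionLCM-V2's code; consistency_fn and the sigma schedule are placeholders:

import torch

@torch.no_grad()
def sample_motion(consistency_fn, text_emb, shape, sigmas=(80.0, 24.0, 5.8, 0.5)):
    # consistency_fn(x, sigma, text_emb) -> clean latent; stands in for a
    # trained consistency network distilled from a motion diffusion model.
    x0 = consistency_fn(torch.randn(shape) * sigmas[0], sigmas[0], text_emb)
    for s in sigmas[1:]:                       # optional refinement steps
        x = x0 + torch.randn(shape) * s        # re-noise to level s
        x0 = consistency_fn(x, s, text_emb)    # map straight back to clean
    return x0                                  # decode with a motion VAE afterwards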
@HuaizuJiang
Huaizu Jiang
1 year
#ECCV2024 We've tamed human motion diffusion models to generate stylized motions. Check out our work SMooDi: Stylized Motion Diffusion Model. One step closer to high-fidelity human motion generation. Paper: https://t.co/0hwoqSqZ2G Code: https://t.co/2bn6Bv5E9D
1
9
59
@dreamingtulpa
Dreaming Tulpa 🥓👑
1 year
Want to see what your next flat, house or film set could look like in 3D? HouseCrafter can lift a floorplan into a complete 3D indoor scene. https://t.co/RERu6MaM3G
11
62
262
@HuaizuJiang
Huaizu Jiang
1 year
Excited to share our recent work HouseCrafter, which can lift a floorplan into a complete large 3D indoor scene (e.g. a house). Our key insight is to adapt a 2D diffusion model to generate consistent multi-view RGB-D images for reconstruction. Paper: https://t.co/4Ppg5SjCYN
0
6
55
@YimingXie4
Yiming Xie
1 year
I will present OmniControl ( https://t.co/qVOVMBOdCf) at #ICLR2024. ⏰: Tuesday (May 7) 4:30 p.m. (Halle B #54) Come say hi!
arxiv.org
We present a novel approach named OmniControl for incorporating flexible spatial control signals into a text-conditioned human motion generation model based on the diffusion process. Unlike...
@YimingXie4
Yiming Xie
2 years
Excited to share 🔥OmniControl🔥 for incorporating 💭flexible spatial control signals💭 into a text-conditioned human motion generation. The generated motions are realistic, coherent, and consistent with the spatial constraints. -Project page: https://t.co/Q0RhwUP7jz
0
8
54
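Both OmniControl tweets describe injecting sparse spatial constraints into a text-conditioned motion diffusion model. One standard way to do that is gradient-based guidance during sampling; the sketch below illustrates that idea only, and denoiser, fk, and the step size are placeholders, not OmniControl's actual API:

import torch

def guided_step(denoiser, x_t, t, text_emb, ctrl_pos, ctrl_mask, fk, step=0.1):
    # denoiser(x_t, t, text_emb) -> predicted clean motion x0_hat;
    # fk(x0_hat) -> (T, J, 3) global joint positions; both placeholders.
    x_t = x_t.detach().requires_grad_(True)
    x0_hat = denoiser(x_t, t, text_emb)
    joints = fk(x0_hat)
    # Penalize deviation from the spatial constraints; ctrl_mask is 1
    # only where a joint is constrained at a given frame.
    loss = (((joints - ctrl_pos) ** 2) * ctrl_mask).sum()
    grad, = torch.autograd.grad(loss, x_t)
    return (x_t - step * grad).detach()        # nudge the sample, then continue

Because the mask can select any joint at any frame, the same mechanism covers dense trajectories and single-keyframe constraints alike.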
@YimingXie4
Yiming Xie
2 years
Glad to be a recipient of the 2024 Apple Scholars in AI/ML PhD fellowship! Thanks Apple and all my mentors and collaborators! https://t.co/laLLT2Ji8B
machinelearning.apple.com
Apple is proud to announce the 2024 recipients of the Apple Scholars in AIML PhD fellowship.
11
4
112
@YimingXie4
Yiming Xie
2 years
Our work OmniControl was accepted at #ICLR2024! Incorporating flexible spatial control signals into a text-conditioned human motion generation model. Project: https://t.co/Q0RhwUP7jz Code:
github.com
OmniControl: Control Any Joint at Any Time for Human Motion Generation, ICLR 2024 - neu-vi/OmniControl