Yiming Xie
@YimingXie4
Followers: 681 · Following: 2K · Media: 9 · Statuses: 63
CS PhD student @khourycollege | B.E. @ZJU_China
Joined March 2020
            
🚀 Happening now in Room 320 at #ICCV2025! Join our full-day tutorial on 3D Human Motion Generation & Simulation 🔗 https://t.co/92grSRF8f3
          
🚀 We’ll be hosting a Tutorial on "3D Human Motion Generation and Simulation" at ICCV 2025 in Honolulu, Hawaii! 🌺 📅 Date: October 19, 2025 ⏰ Time: 9:00–16:00 (HST) 🔗 More details & resources: https://t.co/S1Unz1oRdr
              #AIGC #Simulation #robotics #ComputerVision #ICCV2025
            
            
                
             Happening now at #ICCV2025 in Hawaii! ✨ Join our tutorial on 3D Human Motion Generation & Simulation! 📆 Today, Oct 19, 9am–5pm 🔗 
           🚀 We’ll be hosting a Tutorial on "3D Human Motion Generation and Simulation" at ICCV 2025 in Honolulu, Hawaii! 🌺 🏃♀️🏃♂️🧗🏊🚴🕺🤖 📅 Date: October 19, 2025 ⏰ Time: 9:00–16:00 (HST) This tutorial brings together leading researchers to cover the foundations and latest advances 
            
                
             🚀If you’ll be at ICCV 2025, please join the “3D Human Motion Generation and Simulation” tutorial on Sunday, October 19, 2025, 9:00–17:00 (HST), in Room 320. #ICCV #ICCV2025 #Humanmotion #Motion #Animation #Simulation
          
🚀 We’ll be hosting a Tutorial on "3D Human Motion Generation and Simulation" at ICCV 2025 in Honolulu, Hawaii! 🌺 📅 Date: October 19, 2025 ⏰ Time: 9:00–16:00 (HST) 🔗 More details & resources: https://t.co/S1Unz1oRdr
              #AIGC #Simulation #robotics #ComputerVision #ICCV2025
            
            
                
             Is Motion Tracking All You Need for Humanoid Control? Come to my tutorial about physics-based humanoid control in simulation and the real world! I will share our latest and greatest results. 🏖️🏖️🏖️ 
          
                
🚀 We’ll be hosting a Tutorial on "3D Human Motion Generation and Simulation" at ICCV 2025 in Honolulu, Hawaii! 🌺 📅 Date: October 19, 2025 ⏰ Time: 9:00–16:00 (HST) 🔗 More details & resources: https://t.co/S1Unz1oRdr
             #AIGC #Simulation #robotics #ComputerVision #ICCV2025
          
          
                
             1) 🚀 From Sketch to Animation! Ever wished your hand-drawn storyboards could come to life? 🎨 Meet Sketch2Anim — our framework that transforms sketches into expressive 3D animations. Presenting at #SIGGRAPH2025 🇨🇦🎉 🔗 Project:  https://t.co/QDvq7IRg13 
          
          
                
🌟LMMs such as GPT‑o3 can solve spatial tasks from RGBD videos, given strong perception and prompting. 🚀We introduce Struct2D, a method that boosts spatial reasoning in open-source models. Even Qwen-VL-3B + Struct2D outperforms existing 7B models. 📜arXiv: https://t.co/lomJaaF83C
          
          
                
             We revisit the representation in human motion generation, showing that absolute joint coordinates outperform the de facto kinematic-aware, local-relative, and redundant choice. Benefits include: ✅ Easy motion control/editing ✅ Direct generation of SMPL mesh vertices in motion 
          
                
We’ve upgraded Stable Video Diffusion 4D to Stable Video 4D 2.0 (SV4D 2.0), improving the quality of 4D outputs generated from a single object-centric video. While 3D provides a static view of an object’s shape and size, 4D extends this by including time, showing how the object
          
                
             🎉Come check out our poster #ICLR2025! SV4D: Dynamic 3D Content Generation with Multi-Frame and Multi-View Consistency 🗓️ Thursday, April 24 ⏰ 3:00 PM – 5:30 PM 📍 Hall 3 + Hall 2B, Poster #112 🧑💻 Presented by @chunhanyao @HuaizuJiang 🔗 
           We are pleased to announce the availability of Stable Video 4D, our very first video-to-video generation model that allows users to upload a single video and receive dynamic novel-view videos of eight new angles, delivering a new level of versatility and creativity. In 
            
                
             Introducing Stable Virtual Camera: This multi-view diffusion model transforms 2D images into immersive 3D videos with realistic depth and perspective—without complex reconstruction or scene-specific optimization. 
          
                
             Can we robustly track an object’s 6D pose in contact-rich, occluded scenarios? Yes! Our solution, V-HOP, fuses vision and touch through a visuo-haptic transformer for precise, real-time tracking. arXiv:  https://t.co/gz3yo4a7Ce  Project:  https://t.co/nvajek3CL6 
          
          
                
🔥Today, we announce MotionLCM-V2, a state-of-the-art text-to-motion model in generation quality, motion-text alignment, and inference speed. ✍️Blogpost: https://t.co/NQ38yiYpxD 💻Code: https://t.co/RcyxyAnThD
          
          
                
            
            #ECCV2024 We've tamed human motion diffusion models to generate stylized motions. Check out our work SMooDi: Stylized Motion Diffusion Model. One step closer to high-fidelity human motion generation. Paper:  https://t.co/0hwoqSqZ2G  Code:  https://t.co/2bn6Bv5E9D 
          
          
                
             We are pleased to announce the availability of Stable Video 4D, our very first video-to-video generation model that allows users to upload a single video and receive dynamic novel-view videos of eight new angles, delivering a new level of versatility and creativity. In 
          
                
             Want to see what your next flat, house or film set could look like in 3D? HouseCrafter can lift a floorplan into a complete 3D indoor scene.  https://t.co/RERu6MaM3G 
          
          
                
             Excited to share our recent work HouseCrafter, which can lift a floorplan into a complete large 3D indoor scene (e.g. a house). Our key insight is to adapt a 2D diffusion model to generate consistent multi-view RGB-D images for reconstruction. Paper:  https://t.co/4Ppg5SjCYN 
          
          
                
I will present OmniControl ( https://t.co/qVOVMBOdCf ) at #ICLR2024. ⏰ Tuesday (May 7), 4:30 p.m. (Halle B #54). Come say hi!
          
            
            arxiv.org
              We present a novel approach named OmniControl for incorporating flexible spatial control signals into a text-conditioned human motion generation model based on the diffusion process. Unlike...
             Excited to share 🔥OmniControl🔥 for incorporating 💭flexible spatial control signals💭 into a text-conditioned human motion generation. The generated motions are realistic, coherent, and consistent with the spatial constraints. -Project page:  https://t.co/Q0RhwUP7jz 
            
            
                
             Glad to be a recipient of the 2024 Apple Scholars in AI/ML PhD fellowship! Thanks Apple and all my mentors and collaborators!  https://t.co/laLLT2Ji8B 
          
          
            
            machinelearning.apple.com
              Apple is proud to announce the 2024 recipients of the Apple Scholars in AIML PhD fellowship.
            
                
Our work OmniControl is accepted at #ICLR2024! It incorporates flexible spatial control signals into a text-conditioned human motion generation model. Project: https://t.co/Q0RhwUP7jz Code:
          
            
            github.com
              OmniControl: Control Any Joint at Any Time for Human Motion Generation, ICLR 2024 - neu-vi/OmniControl
             Excited to share 🔥OmniControl🔥 for incorporating 💭flexible spatial control signals💭 into a text-conditioned human motion generation. The generated motions are realistic, coherent, and consistent with the spatial constraints. -Project page:  https://t.co/Q0RhwUP7jz 
            
            
                