Ming-Yu Liu

@liu_mingyu

Followers: 9K · Following: 2K · Media: 81 · Statuses: 983

VP of Research at NVIDIA, Head of NVIDIA Deep Imagination Research Lab, IEEE Fellow.

Santa Clara, CA
Joined December 2015
@liu_mingyu
Ming-Yu Liu
11 days
RT @NVIDIAAI: How do you teach an AI model to reason? 🤔 We are developing a set of tests that coach AI models to understand the physical…
0
18
0
@liu_mingyu
Ming-Yu Liu
11 days
RT @NVIDIAAIDev: Ranked #1 on @Meta's Physical Reasoning Leaderboard on @huggingface for a reason. 👏 🔥 🏆 Cosmos Reason enables robots and…
0
27
0
@liu_mingyu
Ming-Yu Liu
26 days
RT @huangjh_hjh: [1/N] 🎥 We've made available a powerful spatial AI tool named ViPE: Video Pose Engine, to recover camera motion, intrinsic…
0
98
0
@liu_mingyu
Ming-Yu Liu
27 days
The submissions portal for the NVIDIA 2026-2027 Graduate Fellowships is now open to PhD students working on AI. Please apply!
1
2
22
@liu_mingyu
Ming-Yu Liu
29 days
In Cosmos, we are hiring Cosmos World Foundation Model builders. If you are interested in building large-scale video foundation models and multimodal LLMs for robots and cars, please send your CV to mingyul@nvidia.com. If you have experience in large-scale diffusion models…
@nvidiaomniverse
NVIDIA Omniverse
1 month
Kick off your #OpenUSD Day with a look into the future of robotics and autonomous vehicles. 🤖 Join @liu_mingyu as he shares how #NVIDIACosmos world foundation models unlock prediction and reasoning for the next wave of robotics and autonomous vehicles. 📅 Wednesday, 8/13 at…
0
3
53
@liu_mingyu
Ming-Yu Liu
29 days
RT @nvidiaomniverse: Kick off your #OpenUSD Day with a look into the future of robotics and autonomous vehicles. 🤖 Join @liu_mingyu as he…
0
8
0
@liu_mingyu
Ming-Yu Liu
1 month
Together with Aaron Lefohn and Sanja Fidler, we will give a special address at SIGGRAPH, where I will share an update on our vision and current work in enabling Physical AI. Please join us.
0
3
40
@liu_mingyu
Ming-Yu Liu
2 months
RT @SeanKirmani: 🤖🌎 We are organizing a workshop on Robotics World Modeling at @corl_conf 2025! We have an excellent group of speakers and…
0
37
0
@liu_mingyu
Ming-Yu Liu
2 months
RT @hanna_mao: We build Cosmos-Predict2 as a world foundation model for Physical AI builders, fully open and adaptable. Post-train it for…
0
74
0
@liu_mingyu
Ming-Yu Liu
3 months
Big congrats to @ericjang11 and the team on the 1X World Model release. Verification is an important part of producing production AI models. Given the diverse nature of the work environments, it makes a lot of sense to leverage a world model to help with policy evaluation.
@ericjang11
Eric Jang
3 months
We've made substantial progress on our action-conditioned video generation model, aka the "1X World Model", and we show that we can use it to evaluate robot policies instead of running experiments in the real world. Check it out!
1
0
13
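A rough sketch of what world-model-based policy evaluation looks like in general; `policy`, `world_model`, and `reward_fn` below are hypothetical placeholder callables, not the actual 1X or Cosmos APIs.

```python
# Hypothetical sketch: score a robot policy by rolling it out inside a learned,
# action-conditioned world model instead of the real environment.
# `policy`, `world_model`, and `reward_fn` are placeholder callables, not real APIs.
from typing import Any, Callable

def evaluate_in_world_model(
    policy: Callable[[Any], Any],            # maps observation -> action
    world_model: Callable[[Any, Any], Any],  # predicts next observation from (obs, action)
    reward_fn: Callable[[Any, Any], float],  # scores a predicted (obs, action) pair
    initial_obs: Any,
    horizon: int = 50,
) -> float:
    """Return the cumulative predicted reward of `policy` over a simulated rollout."""
    obs, total_reward = initial_obs, 0.0
    for _ in range(horizon):
        action = policy(obs)
        obs = world_model(obs, action)   # imagined next frame / state
        total_reward += reward_fn(obs, action)
    return total_reward
```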
@liu_mingyu
Ming-Yu Liu
3 months
Check out our latest HF demo on 3D generation with part annotation.
@victormustar
Victor M
3 months
Nvidia cooked with PartPacker 3D Generation. A new method to create 3D objects from a single image, with each part separate and easy to edit 🔥. ⬇️ Demo available on Hugging Face
1
0
6
@liu_mingyu
Ming-Yu Liu
3 months
3D asset generation has advanced a lot in the past few years. Generating a holistic 3D asset is no longer a challenging problem. What's next for 3D generation? We believe that generating a 3D asset with individual parts defined is the next frontier. With the parts, we can start…
@ashawkey3
kiui
3 months
Happy to share our work PartPacker: we enable one-shot image-to-3D generation with any number of parts! Project page: Demo: Code:
0
1
22
@liu_mingyu
Ming-Yu Liu
3 months
RT @ashawkey3: Happy to share our work PartPacker: we enable one-shot image-to-3D generation with any number of parts! Project page: https…
0
16
0
@liu_mingyu
Ming-Yu Liu
3 months
RT @TsungYiLinCV: Generating 3D models with parts is a key step toward scalable, interactive simulation environments. Check out our work…
github.com · Efficient Part-level 3D Object Generation via Dual Volume Packing (NVlabs/PartPacker)
0
14
0
@liu_mingyu
Ming-Yu Liu
3 months
For people looking for a diffusion-based video generator to fine-tune or post-train for their downstream physical AI applications, we just released our latest one. We have two models: 2B and 14B; 2B for fast prototyping and 14B for better quality. The license is fully open. Give it a try.
@qsh_zh
Qinsheng Zhang
3 months
🚀 Introducing Cosmos-Predict2! Our most powerful open video foundation model for Physical AI. Cosmos-Predict2 significantly improves upon Predict1 in visual quality, prompt alignment, and motion dynamics, outperforming popular open-source video foundation models. It's openly…
2
10
46
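A minimal sketch of how one might pull a checkpoint to start post-training, assuming the weights are hosted on Hugging Face; the repo id below is illustrative, not a confirmed model name.

```python
# Hypothetical sketch: download an open video-foundation-model checkpoint for
# post-training. The repo id is a placeholder; check the official Cosmos release
# for the actual model names and license terms.
from huggingface_hub import snapshot_download

MODEL_SIZE = "2B"  # "2B" for fast prototyping, "14B" for better quality (per the post)

local_dir = snapshot_download(
    repo_id=f"nvidia/Cosmos-Predict2-{MODEL_SIZE}",  # placeholder repo id
)
print(f"Checkpoint downloaded to {local_dir}")
```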
@liu_mingyu
Ming-Yu Liu
3 months
We post-trained a reasoning model to reason about whether a video is real or generated. It might be very useful as a critic to improve video generators. Take a look. @NVIDIAAI
@mli0603
Max Zhaoshuo Li 李赵硕
3 months
Cosmos-Reason1 has exciting updates 💡 Now it understands physical reality, judging videos as real or fake! Check out the resources 👇 Paper: Huggingface: Code: Project page: (1/n)
0
4
36
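One way such a real-vs-generated critic could plug into a generation pipeline is simple best-of-N filtering; a minimal sketch with placeholder callables (`generate_video` and `critic_real_score` are hypothetical, not the Cosmos-Reason API).

```python
# Hypothetical sketch: use a real-vs-generated critic for best-of-N filtering of a
# video generator's outputs. Both callables are placeholders, not actual Cosmos APIs.
from typing import Any, Callable, List

def best_of_n(
    generate_video: Callable[[str], Any],       # prompt -> candidate video
    critic_real_score: Callable[[Any], float],  # video -> score for how "real" it looks
    prompt: str,
    n: int = 4,
) -> Any:
    """Generate n candidates and keep the one the critic judges most realistic."""
    candidates: List[Any] = [generate_video(prompt) for _ in range(n)]
    scores = [critic_real_score(video) for video in candidates]
    best_index = max(range(n), key=lambda i: scores[i])
    return candidates[best_index]
```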
@liu_mingyu
Ming-Yu Liu
3 months
RT @mli0603: Cosmos-Reason1 has exciting updates 💡 Now it understands physical reality, judging videos as real or fake! Check out the reso…
0
32
0
@liu_mingyu
Ming-Yu Liu
4 months
Check out our new work on Direct Discriminative Optimization for improving GenAI models.
@zkwthu
Kaiwen Zheng
4 months
1/ 💡 New paper from NVIDIA & Tsinghua, @ICML2025 Spotlight! Direct Discriminative Optimization (DDO) enables GAN-style finetuning of diffusion/autoregressive models without extra networks. SOTA achieved on ImageNet-512! Website: Code:
0
1
16
@liu_mingyu
Ming-Yu Liu
4 months
RT @zkwthu: 1/ 💡 New paper from NVIDIA & Tsinghua, @ICML2025 Spotlight! Direct Discriminative Optimization (DDO) enables GAN-style finetuning of…
0
14
0