
Tsung-Yi Lin
@TsungYiLinCV
Followers 2K | Following 857 | Media 17 | Statuses 121
Principal Research Scientist @Nvidia | Ex-@Google Brain Team | Computer Vision & Machine Learning
Joined November 2018
RT @hanna_mao: We build Cosmos-Predict2 as a world foundation model for Physical AI builders — fully open and adaptable. Post-train it for…
RT @LearnOpenCV: NVIDIA’s Cosmos Reason1 is a family of Vision Language Models trained to understand the physical world and make decisions…
RT @victormustar: Nvidia cooked with PartPacker 3D Generation. A new method to create 3D objects from a single image, with each part separa…
Generating 3D models with parts is a key step toward scalable, interactive simulation environments. Check out our work, PartPacker, and the concurrent project, PartCrafter! PartPacker: PartCrafter:
Happy to share our work PartPacker: we enable one-shot image-to-3D generation with any number of parts! Project page: Demo: Code:
The Vision Meets Physics workshop just started! Come join us!
Join us at the 1st workshop on Vision Meets Physics: Synergizing Physical Simulation and Computer Vision at #CVPR2025 tomorrow! Thought-provoking talks and expert insights from leading researchers that YOU CANNOT MISS! 📍 104A ⏰ 8:45am, June 12th
RT @qsh_zh: 🚀 Introducing Cosmos-Predict2! Our most powerful open video foundation model for Physical AI. Cosmos-Predict2 significantly im…
RT @FangyinWei: Join us at the 1st workshop on Vision Meets Physics: Synergizing Physical Simulation and Computer Vision at #CVPR2025 tomor…
RT @mli0603: Cosmos-Reason1 has exciting updates 💡 Now it understands physical reality, judging videos as real or fake! Check out the reso…
RT @KumbongHermann: Excited to be presenting our new work, HMAR: Efficient Hierarchical Masked Auto-Regressive Image Generation, at #CVPR202…
Future frames light the path to smarter actions! 🚀🤖 CoT-VLA leverages visual chain-of-thought reasoning to unlock large-scale video data and guide goal-driven robotics. #CVPR2025 #AI #Robotics.
Introducing CoT-VLA – Visual Chain-of-Thought Reasoning for Robot Foundation Models! 🤖 By leveraging next-frame prediction as visual chain-of-thought reasoning, CoT-VLA uses future prediction to guide action generation and unlock large-scale video data for training. #CVPR2025
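For readers who want the gist in code, here is a minimal sketch of the two-stage pattern the CoT-VLA announcement describes: predict a future frame as a visual chain of thought, then condition action generation on that prediction. This is an illustrative outline only; the class, argument, and method names below are hypothetical placeholders and do not come from the released CoT-VLA code.

```python
# Minimal sketch of the two-stage visual chain-of-thought idea described above.
# NOT the authors' implementation; all names here are hypothetical placeholders.
import torch
import torch.nn as nn


class VisualChainOfThoughtPolicy(nn.Module):
    """Predict a future frame first, then condition action generation on it."""

    def __init__(self, frame_predictor: nn.Module, action_decoder: nn.Module):
        super().__init__()
        # Predicts a future observation (a visual "subgoal") from the current
        # observation and the language instruction.
        self.frame_predictor = frame_predictor
        # Maps (current obs, predicted future obs, instruction) to actions.
        self.action_decoder = action_decoder

    def forward(self, obs: torch.Tensor, instruction: torch.Tensor) -> torch.Tensor:
        # Step 1: visual chain of thought -- imagine the frame the robot should reach next.
        future_frame = self.frame_predictor(obs, instruction)
        # Step 2: generate actions grounded in both the current and the imagined frame.
        return self.action_decoder(obs, future_frame, instruction)
```

Because the first stage is plain next-frame prediction, it can be trained on large action-free video corpora, which is the point the tweet makes about unlocking large-scale video data.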
RT @yen_chen_lin: Video generation models exploded onto the scene in 2024, sparked by the release of Sora from OpenAI. I wrote a blog post…