
Joel Jang
@jang_yoel
Followers
2K
Following
2K
Media
50
Statuses
365
Senior Research Scientist @nvidiaai GEAR Lab working on Project GR00T. Leading video world model & latent actions.
Seattle, US
Joined March 2021
Introducing DreamGen! We got humanoid robots to perform totally new verbs in new environments through video world models. We believe video world models will solve the data problem in robotics. Bringing the paradigm of scaling human hours to GPU hours. Quick 🧵
RT @DrJimFan: World modeling for robotics is incredibly hard because (1) control of humanoid robots & 5-finger hands is wayyy harder than…
RT @TheHumanoidHub: A humanoid robot policy trained solely on synthetic data generated by a world model. Research Scientist Joel Jang pres…
RT @DrJimFan: I've been a bit quiet on X recently. The past year has been a transformational experience. Grok-4 and Kimi K2 are awesome, bu…
RT @agibotworld: Compete for a $560,000 Prize Pool at IROS 2025 AgiBot World Challenge! The AgiBot World Challenge – Manipulation Track i…
Check out Cosmos-Predict2, a new SOTA video world model trained specifically for Physical AI (powering GR00T Dreams & DreamGen)!
We build Cosmos-Predict2 as a world foundation model for Physical AI builders – fully open and adaptable. Post-train it for specialized tasks or different output types. Available in multiple sizes, resolutions, and frame rates. Watch the repo walkthrough
GR00T Dreams code is live! NVIDIA GEAR Lab's open-source solution for robotics data via video world models. Fine-tune on any robot, generate 'dreams', extract actions with IDM, and train visuomotor policies with LeRobot datasets (GR00T N1.5, SmolVLA).
github.com
Nvidia GEAR Lab's initiative to solve the robotics data problem using world models - NVIDIA/GR00T-Dreams
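The four steps named in the post (fine-tune a world model, generate dreams, extract actions with an inverse dynamics model, package data for policy training) can be sketched as a toy pipeline. Everything below is a hypothetical illustration, not the actual GR00T-Dreams API: the world model is stubbed as a linear rollout over a 1-D state, and a finite-difference stands in for the IDM.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DreamFrame:
    # Simplified: a real "dream" frame is a video frame; here we track
    # a single scalar stand-in for the robot's state.
    state: float

def generate_dreams(initial_state: float, n_frames: int) -> List[DreamFrame]:
    # Step 2 stand-in: a fine-tuned video world model would roll out
    # dream videos; this stub produces a linear state trajectory.
    return [DreamFrame(state=initial_state + 0.1 * t) for t in range(n_frames)]

def extract_actions_idm(frames: List[DreamFrame]) -> List[float]:
    # Step 3 stand-in: an inverse dynamics model (IDM) infers the action
    # between consecutive frames; here we just finite-difference states.
    return [nxt.state - cur.state for cur, nxt in zip(frames, frames[1:])]

def to_episode(frames: List[DreamFrame], actions: List[float]) -> List[dict]:
    # Step 4 stand-in: pair each observation with the inferred action,
    # the shape a LeRobot-style dataset episode would need.
    return [{"observation": f.state, "action": a} for f, a in zip(frames, actions)]

dreams = generate_dreams(0.0, 5)
actions = extract_actions_idm(dreams)
episode = to_episode(dreams, actions)
```

The point of the sketch is the data flow: no real robot acts anywhere in the loop, yet the pipeline still yields (observation, action) pairs a visuomotor policy can train on.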
RT @youliangtan: How we improve VLA generalization? Last week we upgraded #NVIDIA GR00T N1.5 with minor VLM tweaks, FLARE, and richer dat…
RT @qsh_zh: Introducing Cosmos-Predict2! Our most powerful open video foundation model for Physical AI. Cosmos-Predict2 significantly im…
RT @chris_j_paxton: Assuming that we need ~2 trillion tokens to get to a robot GPT, how can we get there? I went through a few scenarios lo…
RT @AiYiyangZ: ReAgent-V Released! A unified video framework with reflection and reward-driven optimization. Real-time self-correct…
Giving a talk about GR00T N1, GR00T N1.5, and GR00T Dreams at NVIDIA GTC Paris, 06.11, 2PM–2:45PM CEST. If you are at Vivatech in Paris, please stop by the "An Introduction to Humanoid Robotics" session!
Are you curious about #humanoidrobotics? Join our experts at #GTCParis for a deep dive into the #NVIDIAIsaac GR00T platform and its four pillars: Robot foundation models for cognition and control. Simulation frameworks built on @nvidiaomniverse and #NVIDIACosmos. Data
RT @ruijie_zheng12: Representation also matters for VLA models! Introducing FLARE: Robot Learning with Implicit World Modeling. With future…
RT @adcock_brett: Nvidia also announced DreamGen, a new engine that scales robot learning with digital dreams. It produces large volumes of…
RT @TheHumanoidHub: NVIDIA has published a paper on DREAMGEN – a powerful 4-step pipeline for generating synthetic data for humanoids that…
RT @snasiriany: It's not a matter of if, it's a matter of when, video models and world models are going to be a central tool for building r…
RT @luke_ch_song: Getting robot data is difficult for those who don't have the resources, and glad to see @NVIDIARobotics is offering an AP…