Nathan Ratliff
@robot_trainer
Followers 2K · Following 2K · Media 39 · Statuses 525
Director of Robotic Systems @NVIDIA. Isaac Cortex, cobots, geometric methods; PhD CMU, research Max Planck, TTI-C, co-founder Lula Robotics, eng Google, Amazon
Seattle, WA
Joined December 2010
isaac lab is an enabler! without its tiled rendering, dextrah-rgb wouldn't be possible. sim2real rl is the holy grail of robotics, and isaac lab brings together the perfect combination of technologies to legitimately start turning that into a reality.
Our whitepaper on Isaac Lab is out! Isaac Lab is the natural successor to Isaac Gym, which pioneered GPU-accelerated simulation for robotics. It subsumes all the features of Gym and brings the latest advances in simulation technology to robotics researchers. It also supports
Our latest work performs sim2real dexterous grasping using end-to-end depth RL.
our dextrah-rgb code is out! it includes our vectorized geometric fabrics library, which we've been using for safe control of the robot.
Happy to announce that we have finally open sourced the code for DextrAH-RGB along with Geometric Fabrics: https://t.co/v7QPGgtyDi
https://t.co/fHyvvKU9IA
this is really cool. i've always thought learning-based methods were the right approach to global motion generation. nice work! (and all the demos! super robust and general system)
Ever wish a robot could just move to any goal in any environment—avoiding all collisions and reacting in real time? 🚀Excited to share our #CoRL2025 paper, Deep Reactive Policy (DRP), a learning-based motion planner that navigates complex scenes with moving obstacles—directly
Regardless of whether you plan to use them in applications, everyone should learn about Gaussian processes and Bayesian methods. They provide a foundation for reasoning about model construction and all sorts of deep learning behaviour that would otherwise appear mysterious.
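The point about GPs as a reasoning foundation is easy to make concrete: the full posterior is a few lines of linear algebra. A minimal GP-regression sketch with a squared-exponential kernel and zero prior mean (all names here are illustrative, not from any library mentioned above):

```python
import numpy as np

def rbf(a, b, length=1.0):
    # Squared-exponential kernel matrix between 1-D point sets a (n,) and b (m,).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    # Standard GP regression posterior with a zero prior mean.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, cov

x = np.array([-1.0, 0.0, 1.0])
y = np.sin(x)
xs = np.linspace(-2.0, 2.0, 5)
mu, cov = gp_posterior(x, y, xs)
```

The posterior collapses onto the data where you have observations and reverts to the prior where you don't, which is exactly the kind of behaviour-under-uncertainty reasoning the tweet is pointing at.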
hehehe
make sure your expert makes mistakes and has to explore. there's been a lot of work around ensuring demonstrators have the same information as the robot, but this work shows it's super useful for the demonstrator to have less! super interesting.
Want robot imitation learning to generalize to new tasks? Blindfold your human demonstrator! Best robotics paper at EXAIT Workshop #ICML2025
https://t.co/6XT0hdl00d Wait, why does this make sense? Read below!
andrew's explanations are always lucid and insightful. recommend taking a look. deep nets have soft (but flexible) inductive biases preferring simple explanations, and they're able to characterize that rigorously, pulling out some decades-old theory. super cool.
Excited to be presenting my paper "Deep Learning is Not So Mysterious or Different" tomorrow at ICML, 11 am - 1:30 pm, East Exhibition Hall A-B, E-500. I made a little video overview as part of the ICML process (viewable from Chrome):
Reach out to one of us (me, @ankurhandos , Karl Van Wyk, @ritvik_singh9) or one of our interns (@JasonJZLiu, @arthurallshire ) if you want to learn more, or just want to chat about research. We’d love to hear from you!
We’re now scaling massively and deep diving into direct-from-perception RL. The key to that? Global optimization via DexPBT. It works miracles. Sim2real to come, but all of the above was motivated by the goal of getting these early results on real robots. https://t.co/TarLgqJ3ok
population based training (PBT) is underrated for pushing scale and getting better results in GPU-accelerated RL. Our new work DexPBT led by @petrenko_ai shows how it can be used to train highly dexterous hand-arm manipulation in systems with up to 46 DoF. https://t.co/snWtrKkICP
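For intuition, PBT's core loop is just periodic exploit-and-explore over a population of workers. A toy sketch, not the DexPBT code: the "training" objective and the hidden optimum are made up purely to show the mechanism.

```python
import random

def pbt(num_workers=8, steps=50, seed=0):
    rng = random.Random(seed)
    optimum = 0.3  # hidden "good" hyperparameter value for this toy problem
    pop = [{"lr": rng.uniform(0.01, 1.0), "score": 0.0} for _ in range(num_workers)]
    for step in range(steps):
        # Toy training: progress per step is higher the closer lr is to the optimum.
        for w in pop:
            w["score"] += 1.0 - abs(w["lr"] - optimum)
        if step % 5 == 4:  # periodic exploit/explore phase
            pop.sort(key=lambda w: w["score"], reverse=True)
            top, bottom = pop[: num_workers // 4], pop[-(num_workers // 4):]
            for loser, winner in zip(bottom, top):
                loser["score"] = winner["score"]                     # exploit: copy the winner
                loser["lr"] = winner["lr"] * rng.choice([0.8, 1.2])  # explore: perturb hypers
    return max(pop, key=lambda w: w["score"])

best = pbt()
```

The population converges toward the good hyperparameter region without any worker ever computing a gradient over hyperparameters, which is why it composes so well with large-scale GPU-accelerated RL.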
More on those geometric fabrics. This is our OG continuous-control foundation: continuous time, exploiting the mathematical structure of second-order differential equations of paths. It has saved our hardware countless times. It's how we deploy fast. https://t.co/puzl4gmbfE
New work on vectorizing geometric fabric controllers for RL workflows at scale. DeXtreme: Fabric Guided Policies (FGP). Policies are hard on hardware. We need low-level controllers at deployment, which means we need them during training. FGPs increase hardware lifetime, enable
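The second-order structure is easy to see in a toy version: goal attraction and obstacle avoidance both enter as acceleration-level terms in one differential equation, and the integrated motion stays smooth and collision-free. A hedged sketch only; the gains, barrier shape, and Euler integrator are illustrative choices, not the fabrics library:

```python
import numpy as np

def fabric_accel(x, xd, goal, obstacle, radius=0.5):
    # Damped spring toward the goal (critically damped for k=4, c=4).
    a_goal = 4.0 * (goal - x) - 4.0 * xd
    # Radial barrier that grows like 1/d^2 near the obstacle surface,
    # pushing the second-order dynamics away before contact.
    diff = x - obstacle
    center_dist = np.linalg.norm(diff)
    surf_dist = max(center_dist - radius, 0.05)  # clip to keep forces finite
    a_obs = (0.5 / surf_dist**2) * (diff / center_dist)
    return a_goal + a_obs

def rollout(x0, goal, obstacle, dt=0.01, steps=2000):
    # Explicit Euler integration of the second-order system xdd = f(x, xd).
    x, xd = np.array(x0, float), np.zeros(2)
    traj = [x.copy()]
    for _ in range(steps):
        xdd = fabric_accel(x, xd, np.asarray(goal, float), np.asarray(obstacle, float))
        xd = xd + dt * xdd
        x = x + dt * xd
        traj.append(x.copy())
    return np.array(traj)

traj = rollout(x0=[-2.0, 0.1], goal=[2.0, 0.0], obstacle=[0.0, 0.0])
```

Because safety lives in the differential equation itself rather than in the policy, any policy output filtered through such dynamics inherits the avoidance behavior, which is the property that protects hardware during training and deployment.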
A first step was DextrAH-G, which consumed depth images. We deploy to real quickly and early to test the waters and iterate on the training approach. That’s all possible due to SOTA vectorized low-level controllers keeping the robot safe (fabrics). https://t.co/VYyFBwtBNP
Exciting new work! Fast, robust, reactive, direct-from-sensor grasp-anything policies. RL really works, and it’s going to transform the entire robotics economy. DextrAH-G: Dexterous Arm-Hand Grasping https://t.co/z2a2YOkoI1
Also, see @ritvik_singh9’s overview of the work on Atlas with Boston Dynamics. It links to his own thread on DextrAH-RGB in the second post, covering more technical details (he's first author). https://t.co/A5hi2CEtET
Over the past few months we've been working with Boston Dynamics on end-to-end dexterous manipulation for the electric Atlas:
The above is a collaboration with Boston Dynamics building off our latest work on DextrAH-RGB. Here’s an overview thread on DextrAH-RGB: https://t.co/6leydFy5GX We train visuomotor grasp-anything policies in simulation using a distillation pipeline and massive randomization.
Next step in dynamic dexterous grasping from NVIDIA: DextrAH-RGB! No more depth. We’re now consuming RGB stereo pairs, and the resulting perceptual system is much more robust. Trained entirely in sim (IsaacLab), leveraging fast tiled rendering, and deployed zero-shot to real.
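As a rough picture of the distillation step described above: a privileged teacher labels massively randomized states, and a student that only sees partial observations is regressed onto the teacher's actions. A toy stand-in with linear models; every name here is hypothetical, and the real pipeline trains deep visuomotor policies on rendered images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical privileged teacher: a fixed linear map from full 8-D state
# to 4-D actions, standing in for a state-based RL policy.
W_teacher = rng.normal(size=(4, 8))

def teacher(state):
    return state @ W_teacher.T

# The student only sees a partial observation (first 6 state dims) and is
# regressed onto the teacher's action labels -- the core of distillation.
states = rng.normal(size=(4096, 8))   # stand-in for massive randomization
obs = states[:, :6]                   # partial observation
actions = teacher(states)             # privileged teacher labels
W_student, *_ = np.linalg.lstsq(obs, actions, rcond=None)

pred = obs @ W_student
mse = float(np.mean((pred - actions) ** 2))
```

The residual error here comes entirely from the state the student cannot see, which is why distillation pipelines pair the partial observation with enough history or sensing richness to recover what the teacher knew.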
The Dex team at NVIDIA is defining the bleeding edge of sim2real dexterity. Take a look below 🧵 There's a lot happening at NVIDIA in robotics, and we’re looking for good people! Reach out if you're interested. We have some big things brewing (and scaling :)
science
TRI's latest Large Behavior Model (LBM) paper landed on arxiv last night! Check out our project website: https://t.co/n0qmDRivRH One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the
good points. openai created llms at scale well before chatgpt, but chatgpt made them accessible, and that was arguably more impactful. all the recent models have been technically spectacular, but making them universally accessible may again be felt (significantly) more strongly by
A brief overview of GPT-5 GPT-5 could disappoint some and amaze many. It's a strange contradiction, but I'll try to explain. For “hardcore” users, GPT-5 will be a bit of a disappointment, if the rumors are to be believed. Rumor has it that Sam Altman is not particularly