Nathan Ratliff

@robot_trainer

Followers 2K · Following 2K · Media 39 · Statuses 525

Director of Robotic Systems @NVIDIA. Isaac Cortex, cobots, geometric methods; PhD CMU, research Max Planck, TTI-C, co-founder Lula Robotics, eng Google, Amazon

Seattle, WA
Joined December 2010
@arthurallshire
Arthur Allshire
1 month
super excited that we won the best student paper for videomimic! unfortunately i was packing in the hotel during the awards ceremony 😅
@akanazawa
Angjoo Kanazawa
1 month
Congratulations to the videomimic team for winning the best student paper award at CoRL 2025 🥹🎉 Grateful to the CoRL community for the recognition!
21 replies · 11 reposts · 147 likes
@robot_trainer
Nathan Ratliff
2 months
isaac lab is an enabler! without its tiled rendering dextrah-rgb wouldn't be possible. sim2real rl is the holy grail of robotics, and isaac lab brings together the perfect combination of technologies to legitimately start turning that into a reality.
@ankurhandos
Ankur Handa
2 months
Our whitepaper on Isaac Lab is out! Isaac Lab is a natural successor to Isaac Gym, which pioneered GPU-accelerated simulation for robotics. It subsumes all the features of Gym and provides the latest advances in simulation technology to robotics researchers. It also supports
0 replies · 2 reposts · 21 likes
@ritvik_singh9
Ritvik Singh
2 months
Our latest work performs sim2real dexterous grasping using end-to-end depth RL.
14 replies · 48 reposts · 398 likes
@robot_trainer
Nathan Ratliff
2 months
our dextrah-rgb code is out! that includes our vectorized geometric fabrics library we've been using for safe control of the robot.
@ritvik_singh9
Ritvik Singh
2 months
Happy to announce that we have finally open sourced the code for DextrAH-RGB along with Geometric Fabrics: https://t.co/v7QPGgtyDi https://t.co/fHyvvKU9IA
1 reply · 0 reposts · 19 likes
@robot_trainer
Nathan Ratliff
2 months
this is really cool. i've always thought learning-based methods were the right approach to global motion generation. nice work! (and all the demos! super robust and general system)
@JasonJZLiu
Jason Liu
2 months
Ever wish a robot could just move to any goal in any environment—avoiding all collisions and reacting in real time? 🚀Excited to share our #CoRL2025 paper, Deep Reactive Policy (DRP), a learning-based motion planner that navigates complex scenes with moving obstacles—directly
2 replies · 4 reposts · 17 likes
@andrewgwils
Andrew Gordon Wilson
3 months
Regardless of whether you plan to use them in applications, everyone should learn about Gaussian processes, and Bayesian methods. They provide a foundation for reasoning about model construction and all sorts of deep learning behaviour that would otherwise appear mysterious.
17 replies · 20 reposts · 398 likes
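As a concrete companion to @andrewgwils's point, the core of Gaussian process regression fits in a dozen lines of numpy. Everything here (RBF kernel, lengthscale, noise level, the toy sine data) is an illustrative choice for this sketch, not taken from any paper in the thread:

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    # Squared-exponential kernel: k(a, b) = sf^2 * exp(-(a - b)^2 / (2 ell^2))
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    # Standard GP regression: posterior mean K*^T (K + sigma^2 I)^-1 y
    # and posterior covariance K** - K* (K + sigma^2 I)^-1 K*^T.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov

# Toy data: five noiseless samples of sin(x), queried at x = 0.5.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.sin(x)
xs = np.array([0.5])
mean, cov = gp_posterior(x, y, xs)
```

The posterior mean interpolates the training data and the posterior variance quantifies how uncertain the model is between points, which is exactly the kind of reasoning-about-models the tweet advocates.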
@robot_trainer
Nathan Ratliff
4 months
hehehe
@TheHumanoidHub
The Humanoid Hub
4 months
Unitree G1 had a meltdown mid-performance.
1 reply · 0 reposts · 1 like
@robot_trainer
Nathan Ratliff
4 months
😮🫤🤪🫥🤖
@rohanpaul_ai
Rohan Paul
4 months
MASSIVE claim in this paper. AI Architectural breakthroughs can be scaled computationally, transforming research progress from a human-limited to a computation-scalable process. So it turns architecture discovery into a compute‑bound process, opening a path to
0 replies · 0 reposts · 2 likes
@robot_trainer
Nathan Ratliff
4 months
make sure your expert makes mistakes and has to explore. there's been a lot of work around ensuring demonstrators have the same information as the robot, but this work shows it's super useful for the demonstrator to have less! super interesting.
@AvivTamar1
Aviv Tamar
4 months
Want robot imitation learning to generalize to new tasks? Blindfold your human demonstrator! Best robotics paper at EXAIT Workshop #ICML2025 https://t.co/6XT0hdl00d Wait, why does this make sense? Read below!
0 replies · 0 reposts · 3 likes
@robot_trainer
Nathan Ratliff
4 months
andrew's explanations are always lucid and insightful. recommend taking a look. deep nets have soft (but flexible) inductive biases preferring simple explanations, and they're able to characterize that rigorously, pulling out some decades-old theory. super cool.
@andrewgwils
Andrew Gordon Wilson
4 months
Excited to be presenting my paper "Deep Learning is Not So Mysterious or Different" tomorrow at ICML, 11 am - 1:30 pm, East Exhibition Hall A-B, E-500. I made a little video overview as part of the ICML process (viewable from Chrome):
0 replies · 0 reposts · 2 likes
@robot_trainer
Nathan Ratliff
4 months
hehe
@ChrSzegedy
Christian Szegedy
4 months
A melon-sized cherry on top :)
0 replies · 0 reposts · 0 likes
@robot_trainer
Nathan Ratliff
4 months
Reach out to one of us (me, @ankurhandos , Karl Van Wyk, @ritvik_singh9) or one of our interns (@JasonJZLiu, @arthurallshire ) if you want to learn more, or just want to chat about research. We’d love to hear from you!
0 replies · 0 reposts · 5 likes
@robot_trainer
Nathan Ratliff
4 months
We’re now scaling massively and deep diving into direct-from-perception RL. The key to that? Global optimization via DexPBT. It works miracles. Sim2real to come, but all of the above was motivated by the goal of getting these early results on real robots. https://t.co/TarLgqJ3ok
@arthurallshire
Arthur Allshire
2 years
population based training (PBT) is underrated for pushing scale and getting better results in GPU-accelerated RL. Our new work DexPBT led by @petrenko_ai shows how it can be used to train highly dexterous hand-arm manipulation in up to 46 DoF systems. https://t.co/snWtrKkICP
1 reply · 1 repost · 5 likes
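The exploit/explore loop at the heart of population-based training can be sketched in a few lines. This is a toy stand-in with a made-up scalar "training" objective and a single hypothetical hyperparameter ("lr" with a fictitious optimum near 0.3); it is not the DexPBT implementation:

```python
import random

def train_step(hyper, score):
    # Toy stand-in for an RL training epoch: in this fiction, "lr" near 0.3
    # yields the fastest score growth.
    return score + 1.0 - abs(hyper["lr"] - 0.3)

def exploit_explore(population):
    # PBT core: bottom-quartile members copy the hypers (and, in real PBT,
    # weights) of top-quartile members, then perturb them to keep exploring.
    population.sort(key=lambda m: m["score"], reverse=True)
    n = len(population)
    for loser, winner in zip(population[-n // 4:], population[:n // 4]):
        loser["hyper"] = dict(winner["hyper"])               # exploit
        loser["score"] = winner["score"]
        loser["hyper"]["lr"] *= random.choice([0.8, 1.25])   # explore

random.seed(0)
population = [{"hyper": {"lr": random.uniform(0.01, 1.0)}, "score": 0.0}
              for _ in range(8)]
for _ in range(20):
    for m in population:
        m["score"] = train_step(m["hyper"], m["score"])
    exploit_explore(population)

best = max(population, key=lambda m: m["score"])
```

Because all members train in parallel and bad hyperparameters are continually replaced, the search amortizes hyperparameter tuning over a single (GPU-friendly) training run.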
@robot_trainer
Nathan Ratliff
4 months
More on those geometric fabrics. This is our OG continuous control foundation—continuous time; exploits the mathematical structure of second-order differential equations of paths. Has saved our hardware countless times. It's how we deploy fast. https://t.co/puzl4gmbfE
@robot_trainer
Nathan Ratliff
2 years
New work on vectorizing geometric fabric controllers for RL workflows at scale. DeXtreme: Fabric Guided Policies (FGP). Policies are hard on hardware. We need low-level controllers at deployment, which means we need them during training. FGPs increase hardware lifetime, enable
1 reply · 0 reposts · 2 likes
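The "second-order differential equations" framing can be illustrated with a toy point-mass controller: the commanded acceleration is a sum of simple terms (goal attractor, damping, obstacle barrier) integrated forward in time. The gains and the barrier form below are invented for illustration and are a far cry from the actual geometric-fabrics math:

```python
import numpy as np

def fabric_accel(x, xdot, goal, obstacle, k_goal=4.0, k_damp=4.0, k_obs=0.1):
    # Second-order control: xddot = sum of composable acceleration terms.
    a_goal = -k_goal * (x - goal)        # spring-like attractor toward goal
    a_damp = -k_damp * xdot              # damping for stability
    diff = x - obstacle
    dist = np.linalg.norm(diff) + 1e-6
    a_obs = k_obs * diff / dist**3       # barrier pushing away from obstacle
    return a_goal + a_damp + a_obs

# Roll the controller out with semi-implicit Euler integration.
x = np.array([0.0, 0.0])
xdot = np.zeros(2)
goal = np.array([1.0, 0.0])
obstacle = np.array([0.5, 0.1])
dt = 0.01
for _ in range(2000):
    xdot += dt * fabric_accel(x, xdot, goal, obstacle)
    x += dt * xdot
```

The point of composing behavior at the acceleration level is that the same low-level controller runs identically in simulation and on hardware, which is what makes it a safety layer under an RL policy.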
@robot_trainer
Nathan Ratliff
4 months
A first step was DextrAH-G, which consumed depth images. We deploy to real quickly and early to test the waters and iterate on the training approach. That’s all possible due to SOTA vectorized low-level controllers keeping the robot safe (fabrics). https://t.co/VYyFBwtBNP
@robot_trainer
Nathan Ratliff
1 year
Exciting new work! Fast, robust, reactive, direct-from-sensor grasp-anything policies. RL really works, and it’s going to transform the entire robotics economy. DextrAH-G: Dexterous Arm-Hand Grasping https://t.co/z2a2YOkoI1
1 reply · 0 reposts · 2 likes
@robot_trainer
Nathan Ratliff
4 months
Also, see @ritvik_singh9’s overview of the work on Atlas with Boston Dynamics. It links to his own thread on DextrAH-RGB in the second post, covering more technical details (he's first author). https://t.co/A5hi2CEtET
@ritvik_singh9
Ritvik Singh
7 months
Over the past few months we've been working with Boston Dynamics on end-to-end dexterous manipulation for the Atlas:
1 reply · 0 reposts · 2 likes
@robot_trainer
Nathan Ratliff
4 months
The above is a collaboration with Boston Dynamics building off our latest work on DextrAH-RGB. Here’s an overview thread on DextrAH-RGB: https://t.co/6leydFy5GX We train visuomotor grasp-anything policies in simulation using a distillation pipeline and massive randomization.
@robot_trainer
Nathan Ratliff
9 months
Next step in dynamic dexterous grasping from NVIDIA: DextrAH-RGB! No more depth. We’re now consuming RGB stereo pairs, and the resulting perceptual system is much more robust. Trained entirely in sim (IsaacLab), leveraging fast tiled rendering, and deployed zero-shot to real.
1 reply · 0 reposts · 5 likes
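The teacher-student distillation recipe mentioned above reduces to: roll out a privileged teacher in sim, label visited states with its actions, and regress a student that only sees observations. Here is a linear toy version of that idea; every matrix, shape, and the stand-in "encoder" are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical privileged teacher: a linear policy over the full sim state.
W_teacher = rng.normal(size=(2, 4))
def teacher(state):
    return W_teacher @ state

# The student only sees an "observation" -- here a fixed linear projection of
# state, standing in for an image encoder's features.
C = rng.normal(size=(6, 4))
def observe(state):
    return C @ state

# Distillation: sample states, label them with teacher actions, and fit the
# student by least squares on (observation, teacher action) pairs.
states = rng.normal(size=(4, 1000))
obs = observe(states)        # 6 x 1000 student inputs
labels = teacher(states)     # 2 x 1000 teacher actions
W_student, *_ = np.linalg.lstsq(obs.T, labels.T, rcond=None)

# The distilled student now acts from observations alone.
s = rng.normal(size=4)
student_action = W_student.T @ observe(s)
```

In the real pipeline the student is of course a deep visuomotor network trained across massive domain randomization, but the supervision signal has this same shape.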
@robot_trainer
Nathan Ratliff
4 months
The Dex team at NVIDIA is defining the bleeding edge of sim2real dexterity. Take a look below 🧵 There's a lot happening at NVIDIA in robotics, and we’re looking for good people! Reach out if you're interested. We have some big things brewing (and scaling :)
4 replies · 49 reposts · 340 likes
@robot_trainer
Nathan Ratliff
4 months
science
@RussTedrake
Russ Tedrake
4 months
TRI's latest Large Behavior Model (LBM) paper landed on arxiv last night! Check out our project website: https://t.co/n0qmDRivRH One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the
0 replies · 0 reposts · 4 likes
@robot_trainer
Nathan Ratliff
4 months
good points. openai created llms at scale well before chatgpt, but chatgpt made them accessible and that was arguably more impactful. all the recent models have been technically spectacular, but making them universally accessible may again be felt (significantly) more strongly by
@kimmonismus
Chubby♨️
4 months
A brief overview of GPT-5 GPT-5 could disappoint some and amaze many. It's a strange contradiction, but I'll try to explain. For “hardcore” users, GPT-5 will be a bit of a disappointment, if the rumors are to be believed. Rumor has it that Sam Altman is not particularly
1 reply · 0 reposts · 0 likes