Erwin Coumans 🇺🇦
@erwincoumans
Followers 6K · Following 4K · Media 392 · Statuses 2K
NVIDIA, Physics Simulation, Robotics Learning
Joined March 2014
We released the source code of our ongoing research "Tiny Differentiable Simulator", a header-only C++ physics library with zero dependencies. Created with @eric_heiden, who just wrapped up his internship with us at Google Robotics.
github.com
Tiny Differentiable Simulator is a header-only C++ and CUDA physics library for reinforcement learning and robotics with zero dependencies. - erwincoumans/tiny-differentiable-simulator
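To make "differentiable simulator" concrete, here is a toy Python sketch (NOT the tiny-differentiable-simulator API, which is C++/CUDA): unroll simple point-mass dynamics and differentiate a loss through the rollout, here with finite differences standing in for the analytic gradients such a library provides.

```python
# Toy sketch of the idea behind a differentiable simulator: unroll the
# dynamics, define a loss on the final state, and optimize a simulation
# parameter (the launch velocity) by gradient descent through the rollout.

def rollout(v0, dt=0.01, steps=100, g=-9.81):
    """Semi-implicit Euler for a 1D point mass launched upward with speed v0."""
    x, v = 0.0, v0
    for _ in range(steps):
        v += g * dt          # integrate acceleration into velocity
        x += v * dt          # integrate velocity into position
    return x

def loss(v0, target=2.0):
    """Squared distance between the final height and a 2.0 m target."""
    return (rollout(v0) - target) ** 2

def grad_fd(f, v0, eps=1e-5):
    """Central finite-difference gradient (a real library gives this analytically)."""
    return (f(v0 + eps) - f(v0 - eps)) / (2 * eps)

# A few steps of gradient descent on the launch velocity:
v0 = 0.0
for _ in range(50):
    v0 -= 0.05 * grad_fd(loss, v0)
print(round(rollout(v0), 3))  # final height approaches the 2.0 m target
```

The same pattern, with reverse-mode autodiff instead of finite differences, is what makes gradient-based parameter estimation and policy optimization tractable inside a physics engine.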
Simulation drives robotics progress, but how do we close the reality gap? Introducing GaussGym: an open-source framework for learning locomotion from pixels with ultra-fast parallelized photorealistic rendering across >4,000 iPhone, GrandTour, ARKit, and Veo scenes! Thread 🧵
Unitree | Introducing the Unitree H2, Destiny Awakening! 🥳 Welcome to this world: standing 180 cm tall and weighing 70 kg, the H2 bionic humanoid is born to serve everyone, safe and friendly.
Introducing 🚀 Humanoid Everyday — a large, real-world dataset for humanoid whole-body manipulation. Unlike most humanoid data (fixed bases, narrow tasks), ours covers diverse, locomotion-integrated skills. 🔗 Website: https://t.co/0wmXltt13R 📄 Paper: https://t.co/lt8V6HZIO3
Works like this deserve a lot more attention. In embodied AI, the most common argument against physics-based simulation is that deformables are hard/expensive to simulate. Offset Geometric Contact just made it 300x faster. It computes vertex-specific displacement bounds to
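As a toy illustration of the general idea of per-vertex bounds (this is NOT the Offset Geometric Contact algorithm, just a hypothetical sketch): a vertex can conservatively translate up to its distance to the nearest obstacle point without penetrating, so vertices far from any contact get large bounds and need far fewer collision checks.

```python
# Toy conservative per-vertex displacement bound: a vertex may move up
# to its distance to the nearest obstacle point before it can possibly
# penetrate, so distant vertices can take large steps between checks.
import math

def displacement_bound(vertex, obstacle_points):
    """Largest step this vertex can take without touching any obstacle."""
    return min(math.dist(vertex, p) for p in obstacle_points)

obstacles = [(0.0, 0.0), (3.0, 0.0)]           # hypothetical obstacle samples
mesh = [(1.0, 2.0), (1.0, 0.5), (5.0, 4.0)]    # hypothetical mesh vertices
bounds = [displacement_bound(v, obstacles) for v in mesh]
print([round(b, 3) for b in bounds])  # [2.236, 1.118, 4.472]
```

The vertex near the obstacles gets a tight bound while the far one gets a loose bound, which is the kind of per-vertex information that lets a deformable-contact solver skip most of its work.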
It was a joy bringing Jason’s signature spin-kick to life on the @UnitreeRobotics G1. We trained it in mjlab with the BeyondMimic recipe but had issues on hardware last night (the IMU gyro was saturating). One more sim-tuning pass and we nailed it today. With @qiayuanliao and
Implementing motion imitation methods involves lots of subtle details, and not many codebases get them all right. So we're excited to release MimicKit! https://t.co/7enUVUkc3h A framework with high-quality implementations of our methods: DeepMimic, AMP, ASE, ADD, and more to come!
If you missed @yukez's talk at #CoRL2025, here is the link: https://t.co/zxA1l7xQzB 👇 A demo we at GEAR have been cranking on: fully autonomous, human-like locomanipulation via language + vision input. Uncut. The sleepless nights to get the humanoid to move naturally pay off 🥹
Very impressive, @ZhiSu22! I was involved in the Google Brain/DeepMind table tennis project (which initially used PyBullet simulation for training and for safety). Unbelievable that a G1 can be agile enough to play the game. Hoping you release the code/policy for others to try it out (?!)
Humanoid robots playing table tennis fully autonomously. The 'HITTER' system combines a model-based planner with a reinforcement learning (RL) whole-body controller. It is fully autonomous but relies on an external sensing system. A 9-camera OptiTrack motion capture setup
Ssh, don't tell anyone, way too preliminary to be amplified: https://t.co/scNYdUOT5L
https://t.co/WWUVzLMsfg
Today we're putting out an update to the JAX TPU book, this time on GPUs. How do GPUs work, especially compared to TPUs? How are they networked? And how does this affect LLM training? 1/n
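One reason interconnects matter for LLM training can be sketched with a back-of-envelope calculation (illustrative numbers of my own, not figures from the book): in data-parallel training, a ring all-reduce moves roughly 2·(N−1)/N times the gradient bytes over each link, which bounds how fast gradients can be synchronized per step.

```python
# Back-of-envelope sketch: interconnect bandwidth as a bound on
# data-parallel gradient synchronization. A ring all-reduce over N GPUs
# sends 2*(N-1)/N times the gradient bytes across each link.

def allreduce_seconds(param_count, n_gpus, link_gbps, bytes_per_param=2):
    grad_bytes = param_count * bytes_per_param        # e.g. bf16 gradients
    traffic = 2 * (n_gpus - 1) / n_gpus * grad_bytes  # ring all-reduce volume
    return traffic / (link_gbps * 1e9 / 8)            # Gbit/s -> bytes/s

# Hypothetical setup: 7e9-parameter model, 8 GPUs, 400 Gbit/s links.
t = allreduce_seconds(7e9, 8, 400)
print(round(t, 3))  # 0.49
```

If a training step's compute takes less time than this, the links are the bottleneck, which is why GPU-vs-TPU networking differences show up directly in LLM training throughput.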
I’m thrilled to announce that we just released GraspGen, a multi-year project we have been cooking at @NVIDIARobotics 🚀 GraspGen: A Diffusion-Based Framework for 6-DOF Grasping Grasping is a foundational challenge in robotics 🤖 — whether for industrial picking or
Join experts Yuval Tassa from @GoogleDeepMind and Miles Macklin from NVIDIA at #GTCParis to learn about Newton, an open-source, extensible physics engine for robotics simulation, co-developed by Disney Research, Google DeepMind and NVIDIA. Gain insights into breakthroughs in
(1/5) We’ve never enjoyed watching people chop Llamas into tiny pieces. So, we’re excited to be releasing our Low-Latency-Llama Megakernel! We run the whole forward pass in a single kernel. Megakernels are faster & more humane. Here’s how to treat your Llamas ethically: (Joint
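The win behind fusing everything into one kernel can be shown with a toy Python analogy (NOT their CUDA megakernel): separate "kernels" each traverse the data and materialize an intermediate buffer, while the fused version makes one pass with no intermediates, which is what removes launch overhead and memory round-trips on a GPU.

```python
# Toy illustration of kernel fusion: three separate passes, each writing
# an intermediate buffer, versus one fused pass with no intermediates.

def unfused(xs):
    a = [x * 2.0 for x in xs]        # "kernel" 1: scale (writes buffer a)
    b = [x + 1.0 for x in a]         # "kernel" 2: bias  (writes buffer b)
    return [max(x, 0.0) for x in b]  # "kernel" 3: ReLU

def fused(xs):
    # one traversal, all three ops applied per element, no intermediates
    return [max(x * 2.0 + 1.0, 0.0) for x in xs]

data = [-1.0, 0.0, 2.5]
print(fused(data))                   # [0.0, 1.0, 6.0]
assert fused(data) == unfused(data)  # same result, fewer passes
```

A real megakernel applies the same principle to an entire transformer forward pass: one launch, operands kept on-chip between ops instead of bouncing through HBM.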
The full video of my Upper Bound 2025 talk about our research directions should be available at some point, but here are my slides: https://t.co/KPM6NtSaug And here are the notes I made while preparing, which are more extensive than what I had time to say:
Meet 𝐀𝐌𝐎 — our universal whole‑body controller that unleashes the 𝐟𝐮𝐥𝐥 kinematic workspace of humanoid robots to the physical world. AMO is a single policy trained with RL + Hybrid Mocap & Trajectory‑Opt. Accepted to #RSS2025. Try our open models & more 👉
Gemini 2.5 has built-in 3D spatial capabilities!
Not many know that Gemini 2.5 or Flash can do zero-shot embodied reasoning (ER) for tasks like planning trajectories, grasping points, 3D boxes, etc., which is good enough for most research & experimentation. So if you're on the waitlist and still haven't had access to the official
Sim-and-real co-training is the key technique behind GR00T's ability to learn across the data pyramid. Our latest study shows how synthetic and real-world data can be jointly leveraged to train robust, generalizable vision-based manipulation policies. 📚 https://t.co/02e0cR6X9T
Diverse training data leads to a more robust humanoid manipulation policy, but collecting robot demonstrations is slow. Introducing our latest work, Humanoid Policy ~ Human Policy. We advocate human data as a scalable data source for co-training egocentric manipulation policy.⬇️
So the PhysX CUDA kernel source code is now open source!!! https://t.co/HmgSQ8P1cd and
github.com
NVIDIA PhysX SDK. Contribute to NVIDIA-Omniverse/PhysX development by creating an account on GitHub.
My G1 humanoid home setup to test NVIDIA's new GR00T N1 model, led by @DrJimFan and @yukez. Sitting in a wheelchair to focus on manipulation, with a test dataset on Huggingface: https://t.co/1InLeVVu60
https://t.co/CtF3UgatJM With @AltBionics hands and @ManusMeta gloves.
Spatial AI is increasingly important, and the newest papers from #NVIDIAResearch, 3DGRT and 3DGUT, represent significant advancements in enabling researchers and developers to explore and innovate with 3D Gaussian Splatting techniques. 💎 3DGRT (Gaussian Ray Tracing) ➡️