Erwin Coumans 🇺🇦

@erwincoumans

Followers: 6K · Following: 4K · Media: 392 · Statuses: 2K

NVIDIA, Physics Simulation, Robotics Learning

Joined March 2014
@erwincoumans
Erwin Coumans 🇺🇦
6 years
We released the source code of our ongoing research project "Tiny Differentiable Simulator", a header-only C++ physics library with zero dependencies. Created with @eric_heiden, who just wrapped up his internship with us at Google Robotics.
github.com
Tiny Differentiable Simulator is a header-only C++ and CUDA physics library for reinforcement learning and robotics with zero dependencies. - erwincoumans/tiny-differentiable-simulator
17
103
580
@alescontrela
Alejandro Escontrela
28 days
Simulation drives robotics progress, but how do we close the reality gap? Introducing GaussGym: an open-source framework for learning locomotion from pixels with ultra-fast parallelized photorealistic rendering across >4,000 iPhone, GrandTour, ARKit, and Veo scenes! Thread 🧵
11
64
333
@UnitreeRobotics
Unitree
30 days
Unitree Introducing | Unitree H2, Destiny Awakening! 🥳 Welcome to this world: standing 180 cm tall and weighing 70 kg, the H2 bionic humanoid was born to serve everyone safely and in a friendly way.
971
2K
11K
@zhenyuzhao123
Zhenyu Zhao
1 month
Introducing 🚀 Humanoid Everyday — a large, real-world dataset for humanoid whole-body manipulation. Unlike most humanoid data (fixed bases, narrow tasks), ours covers diverse, locomotion-integrated skills. 🔗 Website: https://t.co/0wmXltt13R 📄 Paper: https://t.co/lt8V6HZIO3
8
56
309
@sunfanyun
Fan-Yun Sun
1 month
Works like this deserve a lot more attention. In embodied AI, the most common argument against physics-based simulation is that deformables are hard/expensive to simulate. Offset Geometric Contact just made it 300x faster. It computes vertex-specific displacement bounds to
29
182
2K
@kevin_zakka
Kevin Zakka
1 month
It was a joy bringing Jason’s signature spin-kick to life on the @UnitreeRobotics G1. We trained it in mjlab with the BeyondMimic recipe but had issues on hardware last night (the IMU gyro was saturating). One more sim-tuning pass and we nailed it today. With @qiayuanliao and
@xbpeng4
Jason Peng
1 month
Implementing motion imitation methods involves lots of nuances. Not many codebases get all the details right. So, we're excited to release MimicKit! https://t.co/7enUVUkc3h A framework with high-quality implementations of our methods: DeepMimic, AMP, ASE, ADD, and more to come!
26
92
655
@zhengyiluo
Zhengyi “Zen” Luo
2 months
If you missed @yukez’s talk at #CoRL2025, here is the link https://t.co/zxA1l7xQzB 👇 A demo we at GEAR have been cranking on: fully autonomous, human-like locomanipulation via language + vision input. Uncut. The sleepless nights spent getting the humanoid to move naturally paid off 🥹
5
30
180
@erwincoumans
Erwin Coumans 🇺🇦
3 months
Very impressive @ZhiSu22! I was involved in the Google Brain/DeepMind table tennis project (initially using PyBullet simulation for training and for safety). Unbelievable that a G1 can be agile enough to play the game. Hoping you release the code/policy for others to try it out (?!)
@TheHumanoidHub
The Humanoid Hub
3 months
Humanoid robots playing table tennis fully autonomously. The 'HITTER' system combines a model-based planner with a reinforcement learning (RL) whole-body controller. It is fully autonomous but relies on an external sensing system. A 9-camera OptiTrack motion capture setup
1
1
38
@erwincoumans
Erwin Coumans 🇺🇦
3 months
Ssh, don't tell anyone, way too preliminary to be amplified: https://t.co/scNYdUOT5L https://t.co/WWUVzLMsfg
5
10
142
@jacobaustin132
Jacob Austin
3 months
Today we're putting out an update to the JAX TPU book, this time on GPUs. How do GPUs work, especially compared to TPUs? How are they networked? And how does this affect LLM training? 1/n
38
525
3K
@Adithya_Murali_
Adithya Murali
4 months
I’m thrilled to announce that we just released GraspGen, a multi-year project we have been cooking at @NVIDIARobotics 🚀
GraspGen: A Diffusion-Based Framework for 6-DOF Grasping
Grasping is a foundational challenge in robotics 🤖 — whether for industrial picking or
2
34
206
@NVIDIARobotics
NVIDIA Robotics
6 months
Join experts Yuval Tassa from @GoogleDeepMind and Miles Macklin from NVIDIA at #GTCParis to learn about Newton, an open-source, extensible physics engine for robotics simulation, co-developed by Disney Research, Google DeepMind and NVIDIA. Gain insights into breakthroughs in
8
74
429
@bfspector
Benjamin F Spector
6 months
(1/5) We’ve never enjoyed watching people chop Llamas into tiny pieces. So, we’re excited to be releasing our Low-Latency-Llama Megakernel! We run the whole forward pass in a single kernel. Megakernels are faster & more humane. Here’s how to treat your Llamas ethically: (Joint
34
144
887
@ID_AA_Carmack
John Carmack
6 months
The full video of my Upper Bound 2025 talk about our research directions should be available at some point, but here are my slides: https://t.co/KPM6NtSaug And here are the notes I made while preparing, which are more extensive than what I had time to say:
72
182
2K
@xuxin_cheng
Xuxin Cheng
7 months
Meet 𝐀𝐌𝐎 — our universal whole‑body controller that unleashes the 𝐟𝐮𝐥𝐥  kinematic workspace of humanoid robots to the physical world. AMO is a single policy trained with RL + Hybrid Mocap & Trajectory‑Opt. Accepted to #RSS2025. Try our open models & more 👉
24
115
570
@erwincoumans
Erwin Coumans 🇺🇦
7 months
Gemini 2.5 has built-in spatial 3D capabilities!
@shreyasgite
Shreyas Gite
7 months
Not many know that Gemini 2.5 or Flash can do zero-shot embodied reasoning (ER) for tasks like planning trajectories, grasping points, 3D boxes, etc., which is good enough for most research & experimentation. So if you are on the waitlist and still haven't had access to official
0
0
4
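For context, here is a minimal sketch of what that kind of zero-shot spatial prompt can look like with the google-genai Python SDK. The model name, image file, and JSON output schema in the prompt are illustrative assumptions, not details from the tweet.

```python
# Minimal sketch (assumptions: google-genai SDK installed, GEMINI_API_KEY set,
# a local scene image "tabletop.jpg"; model name and prompt schema are illustrative).
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Upload an image of the scene the robot should reason about.
image = client.files.upload(file="tabletop.jpg")

# Ask for 2D boxes around graspable objects; spatial outputs are commonly
# requested as JSON with coordinates normalized to a 0-1000 grid.
prompt = (
    "Detect every graspable object in this image. Reply with JSON only: "
    '[{"label": str, "box_2d": [ymin, xmin, ymax, xmax]}] '
    "with coordinates normalized to 0-1000."
)

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumption: any vision-capable Gemini 2.5 model
    contents=[image, prompt],
)
print(response.text)  # parse the JSON to get boxes/grasp points for planning
```

The same pattern extends to trajectory waypoints or 3D boxes by changing the requested schema in the prompt.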
@yukez
Yuke Zhu
8 months
Sim-and-real co-training is the key technique behind GR00T's ability to learn across the data pyramid. Our latest study shows how synthetic and real-world data can be jointly leveraged to train robust, generalizable vision-based manipulation policies. 📚 https://t.co/02e0cR6X9T
2
50
288
@RogerQiu_42
Roger Qiu
8 months
Diverse training data leads to a more robust humanoid manipulation policy, but collecting robot demonstrations is slow. Introducing our latest work, Humanoid Policy ~ Human Policy. We advocate human data as a scalable data source for co-training an egocentric manipulation policy. ⬇️
8
54
246
@erwincoumans
Erwin Coumans 🇺🇦
8 months
So the PhysX CUDA kernel source code is now open source!!! https://t.co/HmgSQ8P1cd and
github.com
NVIDIA PhysX SDK. Contribute to NVIDIA-Omniverse/PhysX development by creating an account on GitHub.
11
138
711
@erwincoumans
Erwin Coumans 🇺🇦
8 months
My G1 humanoid home setup to test NVIDIA's new GR00T N1 model, led by @DrJimFan and @yukez. Sitting in a wheelchair to focus on manipulation, with a test dataset on Hugging Face: https://t.co/1InLeVVu60 https://t.co/CtF3UgatJM With @AltBionics hands and @ManusMeta gloves.
8
20
227
@NVIDIAAIDev
NVIDIA AI Developer
8 months
Spatial AI is increasingly important, and the newest papers from #NVIDIAResearch, 3DGRT and 3DGUT, represent significant advancements in enabling researchers and developers to explore and innovate with 3D Gaussian Splatting techniques. 💎 3DGRT (Gaussian Ray Tracing) ➡️
2
49
224