
Ajay Mandlekar (@AjayMandlekar)
3K Followers · 991 Following · 55 Media · 213 Statuses
NVIDIA AI Research Scientist | EE PhD @Stanford | Teaching 🤖 to imitate humans.
Stanford, CA · Joined November 2019
Tired of endlessly teleoperating your robot in order to train it? Introducing SkillMimicGen, a data generation system that automatically scales robot imitation learning by synthesizing demonstrations, integrating motion planning with demo adaptation. 1/
3 replies · 32 reposts · 172 likes
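The demo-adaptation idea at the heart of MimicGen-style systems can be sketched compactly: end-effector waypoints from a source demo are expressed relative to the object they manipulate, then re-anchored to the object's new pose in a new scene. A minimal sketch of that transform; the function and variable names are illustrative assumptions, not SkillMimicGen's actual API:

```python
import numpy as np

def transform_segment(ee_poses_world, obj_pose_src, obj_pose_new):
    """Re-target a demo segment to a new object pose (hypothetical helper,
    not SkillMimicGen's real API).

    ee_poses_world: iterable of 4x4 end-effector poses from the source demo.
    obj_pose_src:   4x4 object pose at demo-collection time.
    obj_pose_new:   4x4 object pose in the new scene.
    """
    # Express each end-effector pose in the source object's frame, then
    # re-anchor it to the object's new pose in the world frame.
    src_inv = np.linalg.inv(obj_pose_src)
    return [obj_pose_new @ src_inv @ T for T in ee_poses_world]
```

The transformed waypoints cover only the object-interaction segments; supplying collision-free transit between segments is where the motion-planning half of the pipeline comes in.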
We open-source everything - datasets, simulation environments, our data generation framework, and training code, built on our new robomimic v0.5 release!
github.com/GaTech-RL2/mimiclabs
0 replies · 0 reposts · 1 like
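For anyone poking at the released datasets: robomimic stores demonstrations in a single HDF5 file, with per-demo groups under data/ holding actions and obs/<modality> arrays. A quick inspection sketch; the file name is hypothetical, and the exact keys should be treated as assumptions for your dataset version:

```python
import h5py
import numpy as np

# File name is hypothetical; robomimic ships datasets as single HDF5 files.
with h5py.File("demo_dataset.hdf5", "r") as f:
    demos = list(f["data"].keys())            # e.g. ["demo_0", "demo_1", ...]
    print(f"{len(demos)} demonstrations")

    demo = f["data"][demos[0]]
    actions = np.array(demo["actions"])       # (T, action_dim)
    print("actions:", actions.shape)
    # Observations live under obs/<modality> (proprioception, images, ...).
    for key in demo["obs"]:
        print("obs/" + key, demo["obs"][key].shape)
```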
Large datasets play a crucial role in modern robotics, but we don't understand which data is most important to collect. I'm thrilled to announce MimicLabs, a synthetic data generation approach to studying this problem! Check it out!
Large robot datasets are crucial for training 🤖 foundation models. Yet, we lack a systematic understanding of what data matters. Introducing MimicLabs. ✅ System to generate large synthetic robot 🦾 datasets. ✅ Data-composition study 🗄️ on how to collect and use large datasets. 🧵 1/
2 replies · 4 reposts · 39 likes
RT @DJiafei: 1/ 🚀 Announcing #GenPriors — the CoRL 2025 workshop on Generalizable Priors for Robot Manipulation! 📍 Seoul, Korea 📅 Sat 27 S…
0 replies · 9 reposts · 0 likes
RT @RussTedrake: TRI's latest Large Behavior Model (LBM) paper landed on arxiv last night! Check out our project website: …
0 replies · 106 reposts · 0 likes
RT @haldar_siddhant: Current robot policies often face a tradeoff: they're either precise (but brittle) or generalizable (but imprecise)…
0 replies · 77 reposts · 0 likes
Excited to share DexMachina, our new algorithm that learns dexterous manipulation across different robot hands, all from a single human demonstration. Great work led by @ZhaoMandi during her internship in our group!
How can we learn dexterous manipulation for any robot hand from a single human demonstration? Check out DexMachina, our new RL algorithm that learns long-horizon, bimanual dexterous policies for a variety of dexterous hands, articulated objects, and complex motions.
0 replies · 1 repost · 37 likes
Code for AHA has been released - we hope this makes using VLMs for failure reasoning in robotics more accessible! Code: … Website: … Paper: …
arxiv.org: Robotic manipulation in open-world settings requires not only task execution but also the ability to detect and learn from failures. While recent advances in vision-language models (VLMs) and…
Thrilled to announce our code is now open source! If you're looking to generate robot failure data for fine-tuning your VLM/VLA for reasoning, dive in here: … Fine-tune and train your own AHA-13B, and join us at #ICLR2025 in Singapore as we present our work.
0 replies · 0 reposts · 15 likes
Synthetic data generation tools like MimicGen create large sim datasets with ease, but using them in the real world is difficult due to the large sim-to-real gap. Our new work uses simple co-training to unlock the potential of synthetic sim data for real-world manipulation!
How can we use simulation data for real-world robot manipulation? We present sim-and-real co-training, a simple recipe for manipulation. We demonstrate that sim data can significantly enhance real-world performance, even with notable differences between sim and real. (1/n)
0 replies · 3 reposts · 32 likes
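The co-training recipe described above can be pictured as batch mixing: every gradient step draws part of its batch from the sim dataset and part from the real dataset. A minimal behavior-cloning sketch, assuming datasets that yield (observation, action) tensor pairs; the fixed mixing ratio and MSE loss here are illustrative, not the paper's exact recipe:

```python
import torch
from torch.utils.data import DataLoader

def infinite(loader):
    """Cycle a DataLoader forever (it reshuffles on each pass)."""
    while True:
        for batch in loader:
            yield batch

def cotrain(policy, sim_dataset, real_dataset, sim_ratio=0.5,
            batch_size=64, steps=10_000, lr=1e-4):
    """Co-training loop sketch: each batch mixes sim and real samples."""
    n_sim = int(batch_size * sim_ratio)
    sim_iter = infinite(DataLoader(sim_dataset, batch_size=n_sim, shuffle=True))
    real_iter = infinite(DataLoader(real_dataset, batch_size=batch_size - n_sim,
                                    shuffle=True))
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(steps):
        sim_obs, sim_act = next(sim_iter)
        real_obs, real_act = next(real_iter)
        obs = torch.cat([sim_obs, real_obs], dim=0)
        act = torch.cat([sim_act, real_act], dim=0)
        loss = torch.nn.functional.mse_loss(policy(obs), act)  # simple BC loss
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The actual recipe may weight or schedule the two sources differently; this only illustrates the mechanism.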
RT @bowenwen_me: 📢 Time to upgrade your depth camera! Introducing **FoundationStereo**, a foundation model for stereo depth estimation in ze…
0 replies · 98 reposts · 0 likes
Excited to announce that the DexMimicGen simulation environments, datasets, and code to reproduce policy learning results have been released!
github.com/NVlabs/dexmimicgen: simulation environments used as part of the DexMimicGen project.
DexMimicGen is officially accepted to ICRA 2025! See y'all in Atlanta! We also released the datasets generated by DexMimicGen, along with simulation environments and training configs to reproduce our policy learning results with robomimic. Check it out!
0 replies · 5 reposts · 49 likes
RT @haldar_siddhant: The most frustrating part of imitation learning is collecting huge amounts of teleop data. But why teleop robots when…
0 replies · 48 reposts · 0 likes
RT @ryan_hoque: 🚨 New research from my team at Apple - real-time augmented reality robot feedback with just your hands + Vision Pro! Pape…
0 replies · 40 reposts · 0 likes
RT @danfei_xu: I gave an Early Career Keynote at CoRL 2024 on Robot Learning from Embodied Human Data. Recording: …
0 replies · 24 reposts · 0 likes
RT @jang_yoel: Excited to share that 𝐋𝐀𝐏𝐀 has won the Best Paper Award at the CoRL 2024 Language and Robot Learning workshop, selected amon…
0 replies · 12 reposts · 0 likes
RT @kywch500: I wanted to test LeRobot on complex tasks without a physical robot arm. MimicGen has 26K+ demonstrations across 12 tasks. I c…
github.com/kywch/mg2hfbot: converts the MimicGen dataset into LeRobot format to train and evaluate ACT, BC, and diffusion policies.
0 replies · 9 reposts · 0 likes
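The conversion mg2hfbot performs boils down to flattening per-demo HDF5 arrays into the per-frame records that LeRobot-style datasets expect. A rough sketch of that flattening; the column names and the robot0_eef_pos observation key are assumptions borrowed from robomimic conventions, not the repo's actual code:

```python
import h5py
import numpy as np

def mimicgen_to_frames(path, fps=20):
    """Flatten MimicGen demos into per-frame records (column names are
    illustrative assumptions about the LeRobot-style target format)."""
    frames = []
    with h5py.File(path, "r") as f:
        # Sort demos numerically so demo_10 follows demo_9, not demo_1.
        names = sorted(f["data"], key=lambda n: int(n.split("_")[-1]))
        for ep_idx, name in enumerate(names):
            demo = f["data"][name]
            actions = np.array(demo["actions"])
            states = np.array(demo["obs"]["robot0_eef_pos"])  # key is an assumption
            for t in range(len(actions)):
                frames.append({
                    "observation.state": states[t],
                    "action": actions[t],
                    "episode_index": ep_idx,
                    "frame_index": t,
                    "timestamp": t / fps,
                })
    return frames
```

Writing these records out would then go through the dataset API of your installed LeRobot version, which is not shown here.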
RT @danfei_xu: MimicGen is instrumental to our robot data generation pipeline. Excited to see another big step in this direction!
0 replies · 1 repost · 0 likes
Some very cool results on challenging long-horizon bimanual manipulation!
How can robots compositionally generalize over multi-object, multi-robot tasks for long-horizon planning? At #CoRL2024, we introduce Generative Factor Chaining (GFC), a diffusion-based approach that composes spatial-temporal factors into long-horizon skill plans. (1/7)
0 replies · 0 reposts · 3 likes
A very cool imitation learning project that makes use of efficient robot-free data collection, smart robot hardware design, and joint co-training on human and robot data - congrats to the team!
Introducing EgoMimic - just wear a pair of Project Aria @meta_aria smart glasses 👓 to scale up your imitation learning datasets! Check out what our robot can do. A thread below 👇
0 replies · 1 repost · 12 likes
Data collection for humanoids is painful. Can we use simulation to automate it? Introducing DexMimicGen, the newest iteration of the MimicGen data generation system! DexMimicGen trains near-perfect agents for a wide range of challenging bimanual dexterous tasks.
How can we scale up humanoid data acquisition with minimal human effort? Introducing DexMimicGen, a large-scale automated data generation system that synthesizes trajectories from a few human demonstrations for humanoid robots with dexterous hands. (1/n)
0 replies · 1 repost · 18 likes
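Across the MimicGen family, including DexMimicGen, the generation loop is schematically the same: adapt a source demo to a freshly randomized scene, execute it in simulation, and keep the trajectory only if the task succeeds. A schematic sketch with a gym-style environment; adapt_fn and env.is_success() are hypothetical placeholders, not the released API:

```python
import random

def generate_dataset(env, source_demos, num_target, adapt_fn, max_tries=10_000):
    """Generate-and-filter loop (schematic): demos that fail in simulation
    are discarded, so the output contains only successful rollouts."""
    dataset = []
    for _ in range(max_tries):
        if len(dataset) >= num_target:
            break
        obs = env.reset()                       # new randomized scene
        # Adapt a randomly chosen source demo to the current scene,
        # e.g. via the object-centric segment transform sketched earlier.
        actions = adapt_fn(random.choice(source_demos), env)
        traj = []
        for a in actions:
            obs, reward, done, info = env.step(a)
            traj.append((obs, a))
        if env.is_success():                    # hypothetical success check
            dataset.append(traj)
    return dataset
```

The success filter is what lets a handful of human demonstrations balloon into thousands of usable trajectories without a human verifying each one.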