Sudharshan Suresh
@Suddhus
Followers: 1K · Following: 3K · Media: 40 · Statuses: 284
Tech lead & Research Scientist @BostonDynamics Atlas // Prev: @AIatMeta and PhD at @CMU_Robotics.
Cambridge, MA
Joined April 2009
I'm featured in our latest behind-the-scenes release! We break down the ML and perception that drive the whole-body manipulation behaviors from last year. It starts with a neat demo of Atlas's range of motion and our vision foundation models.
5 replies · 7 reposts · 51 likes
Same for ego data, UMI data, etc. An open secret is that “we used Ego4D” actually means filtering down to the 1% of videos that are even vaguely useful for learning. Occlusions, suboptimality, sensor noise, and so many other pitfalls! Modeling + collection co-design is the only gold standard currently.
1 reply · 4 reposts · 50 likes
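To make that curation point concrete, here's a toy filtering pass in the spirit of the tweet; the Clip fields, thresholds, and paths are invented for illustration and are not Ego4D's actual metadata schema.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    path: str
    occlusion_frac: float  # fraction of frames where hands/objects are occluded
    sharpness: float       # mean sharpness proxy; higher is better
    has_task_label: bool   # clip carries a usable manipulation annotation

def is_usable(clip: Clip) -> bool:
    # Reject heavily occluded, blurry, or unlabeled clips.
    return (clip.occlusion_frac < 0.3
            and clip.sharpness > 0.5
            and clip.has_task_label)

clips = [
    Clip("ego/0001.mp4", 0.10, 0.80, True),
    Clip("ego/0002.mp4", 0.70, 0.90, True),   # too occluded
    Clip("ego/0003.mp4", 0.20, 0.20, False),  # blurry and unlabeled
]
usable = [c for c in clips if is_usable(c)]
print(f"kept {len(usable)}/{len(clips)} clips")  # kept 1/3
```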
I'm super excited to announce mjlab today! mjlab = Isaac Lab's APIs + best-in-class MuJoCo physics + massively parallel GPU acceleration. Built directly on MuJoCo Warp with the abstractions you love.
32 replies · 141 reposts · 855 likes
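The tweet doesn't show mjlab's own API, so no attempt is made to reproduce it here; for context, this is what a bare MuJoCo stepping loop looks like in Python — the physics layer that MuJoCo Warp parallelizes on GPU underneath mjlab.

```python
import mujoco

# A one-hinge pendulum defined inline in MJCF.
XML = """
<mujoco>
  <worldbody>
    <body>
      <joint name="hinge" type="hinge" axis="0 1 0"/>
      <geom type="capsule" fromto="0 0 0 0 0 -0.5" size="0.02" mass="1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)
data.qpos[0] = 0.5  # start the pendulum off-vertical

for _ in range(1000):
    mujoco.mj_step(model, data)  # advance physics by one timestep

print(f"t={data.time:.2f}s, hinge angle={data.qpos[0]:.3f} rad")
```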
I'll be giving a talk @CoRL2025 about developing large behavior models on Atlas. Come by the dexterous manipulation workshop at 11:30 to see the talk.
0 replies · 2 reposts · 19 likes
Can we scale up mobile manipulation with egocentric human data? Meet EMMA: Egocentric Mobile MAnipulation EMMA learns from human mobile manipulation + static robot data — no mobile teleop needed! EMMA generalizes to new scenes and scales strongly with added human data. 1/9
10 replies · 64 reposts · 414 likes
Lucas and co. wrote a great blogpost on the careful science and engineering behind language-conditioned policies for whole-body manipulation! There's a lot more work on the horizon; our team is hiring researchers to scale egocentric human data and VLMs for robotics. Reach out!
Today I’m proud to share what I’ve been working on recently with my team at @BostonDynamics along with our collaborators at @ToyotaResearch. https://t.co/yExkGIdwxb
0 replies · 0 reposts · 11 likes
I had a great time presenting at the workshop. Check out my talk and the other amazing panelists here! https://t.co/6qlZMv0fwP
📹 Recording now available! If you missed our workshop at RSS, you can now watch the full session here: https://t.co/1VnLjRyleN Thanks again to all the speakers and participants!
0 replies · 0 reposts · 3 likes
TRI's latest Large Behavior Model (LBM) paper landed on arXiv last night! Check out our project website: https://t.co/n0qmDRivRH One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the…
8 replies · 106 reposts · 491 likes
Current robot policies often face a tradeoff: they're either precise (but brittle) or generalizable (but imprecise). We present ViTaL, a framework that lets robots generalize precise, contact-rich manipulation skills across unseen environments with millimeter-level precision. 🧵
9 replies · 78 reposts · 595 likes
Today we're excited to share a glimpse of what we're building at Generalist. As a first step towards our mission of making general-purpose robots a reality, we're pushing the frontiers of what end-to-end AI models can achieve in the real world. Here's a preview of our early…
33 replies · 150 reposts · 868 likes
How to learn dexterous manipulation for any robot hand from a single human demonstration? Check out DexMachina, our new RL algorithm that learns long-horizon, bimanual dexterous policies for a variety of dexterous hands, articulated objects, and complex motions.
19 replies · 108 reposts · 622 likes
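The tweet doesn't spell out DexMachina's objective; a common pattern for RL from a single demonstration is to blend a demonstration-tracking term with the sparse task reward, sketched below with invented names and weights.

```python
import numpy as np

def tracking_reward(robot_kpts: np.ndarray, demo_kpts: np.ndarray,
                    sigma: float = 0.05) -> float:
    """Exponentiated keypoint-tracking term: near 1 when the robot hand
    matches the demo keypoints at this timestep, decaying with distance."""
    err = np.linalg.norm(robot_kpts - demo_kpts, axis=-1).mean()
    return float(np.exp(-(err / sigma) ** 2))

def reward(robot_kpts, demo_kpts, task_success: float, beta: float = 0.5) -> float:
    # Blend demo tracking with the sparse task objective.
    return beta * tracking_reward(robot_kpts, demo_kpts) + (1.0 - beta) * task_success
```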
Our team also put together a great blogpost: https://t.co/MaxojplNQ1. We talk about supervised learning for 2D keypoints, foundation models for object masks, render-and-compare for object pose, and SuperTracker—our real-time smoother that combines vision, kinematics, and force.
0 replies · 1 repost · 11 likes
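SuperTracker's internals aren't published in this post, so here is only a generic stand-in: a minimal complementary-filter sketch of smoothing a vision pose estimate with a kinematics-based prediction (class name and gain invented).

```python
import numpy as np

class PoseSmoother:
    """Propagate the last estimate with kinematic velocity,
    then correct toward the (noisier, delayed) vision estimate."""

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha  # trust placed in each vision measurement
        self.pos = None     # current smoothed position, xyz

    def update(self, vision_pos: np.ndarray, kin_vel: np.ndarray,
               dt: float) -> np.ndarray:
        if self.pos is None:
            self.pos = vision_pos.copy()
            return self.pos
        predicted = self.pos + kin_vel * dt  # kinematics prediction step
        self.pos = (1.0 - self.alpha) * predicted + self.alpha * vision_pos
        return self.pos
```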
Check out the awesome follow-up to Sparsh by @akashshrm02 and others - self-supervised learning for a variety of downstream tasks with tactile skins!
Robots need touch in human-like hands to reach the goal of general manipulation. However, approaches today either don't use tactile sensing or use a separate architecture per tactile task. Can one model improve many tactile tasks? 🌟Introducing Sparsh-skin: https://t.co/DgTq9OPMap 1/6
0 replies · 0 reposts · 8 likes
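"One model for many tactile tasks" usually means a frozen pretrained encoder with a small probe per downstream task; below is a toy sketch of that pattern, with architecture, sizes, and data all invented rather than taken from Sparsh-skin.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained tactile backbone; weights stay frozen.
encoder = nn.Sequential(nn.Linear(368, 512), nn.ReLU(),
                        nn.Linear(512, 256))
for p in encoder.parameters():
    p.requires_grad = False

# One lightweight head per downstream task, e.g. 6-DoF force/torque.
probe = nn.Linear(256, 6)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

x = torch.rand(64, 368)  # a batch of raw skin signals (placeholder)
y = torch.rand(64, 6)    # downstream-task labels (placeholder)
with torch.no_grad():
    z = encoder(x)       # the same features serve every task head
loss = nn.functional.mse_loss(probe(z), y)
loss.backward()
opt.step()
```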
Honored to be a part of the RSS Pioneers cohort this year - looking forward to catching up with folks in Los Angeles! 🤖
The list of 33 #RSSPioneer2025 awardees is out! Their research interests cover fundamental robot design, modelling and control, robot perception and learning, localisation and mapping, human-robot interaction, healthcare and medical robotics, and soft robots! https://t.co/pSQb8TIZtt
2 replies · 0 reposts · 16 likes
New work from the Robotics team at @AIatMeta. Want to be able to tell your robot to bring you the keys from the table in the living room? Try out Locate 3D! Interactive demo: https://t.co/aS9WPPmhcF Model, code & dataset: https://t.co/oMWc32VrH9
0 replies · 5 reposts · 50 likes
So excited for this!!! The key technical breakthrough here is that we can control joints and fingertips of the robot **without joint encoders**. Learning from self-supervised data collection is all you need for training the humanoid hand control you see below.
Despite great advances in learning dexterity, hardware remains a major bottleneck. Most dexterous hands are bulky, weak, or expensive. I’m thrilled to present the RUKA Hand — a powerful, accessible research tool for dexterous manipulation that overcomes these limitations!
4 replies · 14 reposts · 138 likes
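The tweets don't give RUKA's training recipe; one plausible reading of fingertip control without joint encoders is a forward model learned from self-collected pairs of motor commands and externally tracked fingertip positions. A toy sketch, with all shapes and data invented:

```python
import torch
import torch.nn as nn

# Hypothetical self-collected data: random tendon commands paired with
# fingertip positions recorded by an external tracker (e.g., a mocap glove).
cmds = torch.rand(10_000, 11)  # placeholder: 11 motor commands
tips = torch.rand(10_000, 15)  # placeholder: 5 fingertips x xyz

# Small MLP forward model: motor commands -> fingertip positions.
fwd = nn.Sequential(nn.Linear(11, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 15))
opt = torch.optim.Adam(fwd.parameters(), lr=1e-3)

for step in range(1_000):
    idx = torch.randint(0, cmds.shape[0], (256,))
    loss = nn.functional.mse_loss(fwd(cmds[idx]), tips[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
```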
📢Sonata: Self-Supervised Learning of Reliable Point Representations📢 Meet Sonata, our "3D-DINO" pre-trained with Point Transformer V3, accepted at #CVPR2025! 🌍: https://t.co/x8k3v1kBw9 📦: https://t.co/dmcEAOafkE 🚀: https://t.co/VWTFyIj7De 🔹Semantic-aware and spatial…
4 replies · 50 reposts · 194 likes
Atlas is demonstrating reinforcement learning policies developed using a motion capture suit. This demonstration was developed in partnership with Boston Dynamics and @rai_inst.
855 replies · 5K reposts · 20K likes