Sudharshan Suresh

@Suddhus

Followers
1K
Following
3K
Media
40
Statuses
284

Tech lead & Research Scientist @BostonDynamics Atlas // Prev: @AIatMeta and PhD at @CMU_Robotics.

Cambridge, MA
Joined April 2009
@Suddhus
Sudharshan Suresh
5 months
I'm a featured interview in our latest behind-the-scenes release! We break down the ML and perception that drives the whole-body manipulation behaviors from last year. It starts with a neat demo of Atlas's range-of-motion and our vision foundation models.
5
7
51
@xiao_ted
Ted Xiao
1 month
Same for ego data, UMI data, etc. An open secret is that “we used Ego4D” actually means filtering out the 1% of videos that are vaguely useful for learning. Occlusions, suboptimality, sensor noise, and so many pitfalls! Modeling + collection co-design is the only gold standard currently.
1
4
50
@kevin_zakka
Kevin Zakka
1 month
I'm super excited to announce mjlab today! mjlab = Isaac Lab's APIs + best-in-class MuJoCo physics + massively parallel GPU acceleration. Built directly on MuJoCo Warp with the abstractions you love.
32
141
855
@lucas_manuelli
Lucas Manuelli
1 month
I'll be giving a talk at @CoRL2025 about developing large behavior models on Atlas. Come by the dexterous manipulation workshop at 11:30 to see the talk.
0
2
19
@LawrenceZhu22
Lawrence Yunzhou Zhu
2 months
Can we scale up mobile manipulation with egocentric human data? Meet EMMA: Egocentric Mobile MAnipulation. EMMA learns from human mobile manipulation + static robot data — no mobile teleop needed! EMMA generalizes to new scenes and scales strongly with added human data. 1/9
10
64
414
@Suddhus
Sudharshan Suresh
3 months
Lucas and co. wrote a great blogpost on the careful science and engineering behind language-conditioned policies for whole-body manipulation! There's a lot more work on the horizon; our team is hiring researchers to scale egocentric human data and VLMs for robotics. Reach out!
@lucas_manuelli
Lucas Manuelli
3 months
Today I’m proud to share what I’ve been working on recently with my team at @BostonDynamics along with our collaborators at @ToyotaResearch. https://t.co/yExkGIdwxb
0
0
11
@Suddhus
Sudharshan Suresh
3 months
I had a great time presenting at the workshop - check out my talk and the other amazing panelists here! https://t.co/6qlZMv0fwP
@HaozhiQ
Haozhi Qi
4 months
📹Recording now available! If you missed our workshop at RSS, you can now watch the full session here: https://t.co/1VnLjRyleN Thanks again to all the speakers and participants!
0
0
3
@Suddhus
Sudharshan Suresh
3 months
Viser is such an amazing tool for the community - congrats to @brenthyi and co!
@brenthyi
Brent Yi
3 months
July has been a big month for Viser! - Released v1.0.0😊 - We did some writing - Some demos👇
0
0
4
@RussTedrake
Russ Tedrake
4 months
TRI's latest Large Behavior Model (LBM) paper landed on arxiv last night! Check out our project website: https://t.co/n0qmDRivRH One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the
8
106
491
@haldar_siddhant
Siddhant Haldar
4 months
Current robot policies often face a tradeoff: they're either precise (but brittle) or generalizable (but imprecise). We present ViTaL, a framework that lets robots generalize precise, contact-rich manipulation skills across unseen environments with millimeter-level precision. 🧵
9
78
595
@Suddhus
Sudharshan Suresh
5 months
I'll be speaking at the RSS Dexterous Manipulation Workshop tomorrow, discussing our recent work with Atlas!
@HaozhiQ
Haozhi Qi
5 months
We are excited to host the 3rd Workshop on Dexterous Manipulation at RSS tomorrow! Join us at OHE 122 starting at 9:00 AM! See you there!
1
4
20
@GeneralistAI
Generalist
5 months
Today we're excited to share a glimpse of what we're building at Generalist. As a first step towards our mission of making general-purpose robots a reality, we're pushing the frontiers of what end-to-end AI models can achieve in the real world. Here's a preview of our early
33
150
868
@ZhaoMandi
Mandi Zhao
5 months
How to learn dexterous manipulation for any robot hand from a single human demonstration? Check out DexMachina, our new RL algorithm that learns long-horizon, bimanual dexterous policies for a variety of dexterous hands, articulated objects, and complex motions.
19
108
622
@Suddhus
Sudharshan Suresh
5 months
Our team also put together a great blogpost: https://t.co/MaxojplNQ1. We talk about supervised learning for 2D keypoints, foundation models for object masks, render-and-compare for object pose, and SuperTracker—our real-time smoother that combines vision, kinematics, and force.
bostondynamics.com
The Atlas perception team at Boston Dynamics shares their insights on building agile and adaptable humanoid robotics.
0
1
11
@Suddhus
Sudharshan Suresh
5 months
Check out the awesome follow-up to sparsh by @akashshrm02 and others - self-supervised learning for a variety of downstream tasks with tactile skins!
@akashshrm02
Akash Sharma
6 months
Robots need touch in human-like hands to reach the goal of general manipulation. However, approaches today either don’t use tactile sensing or use a separate architecture per tactile task. Can 1 model improve many tactile tasks? 🌟Introducing Sparsh-skin: https://t.co/DgTq9OPMap 1/6
0
0
8
@Suddhus
Sudharshan Suresh
6 months
Honored to be a part of the RSS Pioneers cohort this year - look forward to catching up with folks in Los Angeles! 🤖
@RSSPioneers
RSS Pioneers
6 months
The list of 33 #RSSPioneer2025 is out! Their research interests cover fundamental robot design, modelling and control, robot perception and learning, localisation and mapping, human-robot interaction, healthcare and medical robotics, and soft robots! https://t.co/pSQb8TIZtt
2
0
16
@_kainoa_
Franziska Meier
7 months
New work from the Robotics team at @AIatMeta. Want to be able to tell your robot to bring you the keys from the table in the living room? Try out Locate 3D! Interactive demo: https://t.co/aS9WPPmhcF Model, code & dataset: https://t.co/oMWc32VrH9
0
5
50
@LerrelPinto
Lerrel Pinto
7 months
So excited for this!!! The key technical breakthrough here is that we can control joints and fingertips of the robot **without joint encoders**. Learning from self-supervised data collection is all you need for training the humanoid hand control you see below.
@irmakkguzey
Irmak Guzey
7 months
Despite great advances in learning dexterity, hardware remains a major bottleneck. Most dexterous hands are either bulky, weak, or expensive. I’m thrilled to present the RUKA Hand — a powerful, accessible research tool for dexterous manipulation that overcomes these limitations!
4
14
138
@XiaoyangWu_
Xiaoyang Wu
8 months
📢Sonata: Self-Supervised Learning of Reliable Point Representations📢 Meet Sonata, our "3D-DINO" pre-trained with Point Transformer V3, accepted at #CVPR2025! 🌍: https://t.co/x8k3v1kBw9 📦: https://t.co/dmcEAOafkE 🚀: https://t.co/VWTFyIj7De 🔹Semantic-aware and spatial
4
50
194
@BostonDynamics
Boston Dynamics
8 months
Atlas is demonstrating reinforcement learning policies developed using a motion capture suit. This demonstration was developed in partnership with @rai_inst.
855
5K
20K