
Roberto
@RobobertoMM
2K Followers · 478 Following · 19 Media · 166 Statuses
Assistant CS Professor at UT Austin. Formerly at Stanford and TU Berlin. Researching at the intersection of vision, learning, and robotics 🏳️‍🌈
Joined August 2019
Honored to give an Early Career Invited Talk at #IJCAI today. See you at 11:30am in room 520C!
1
3
24
It was time to improve evaluation in robot learning! We introduce a methodology based on anonymous A/B testing: fairer, stronger, and community-driven. Awesome work by @KarlPertsch @pranav_atreya @tonyh_lee and an incredible crowdsourcing team. Upload and test your model! 🚀
We’re releasing the RoboArena today! 🤖🦾 Fair & scalable evaluation is a major bottleneck for research on generalist policies. We’re hoping that RoboArena can help! We provide data, model code & sim evals for debugging! Submit your policies today and join the leaderboard! :) 🧵
0
3
20
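Neither tweet spells out how anonymous A/B outcomes turn into a leaderboard, so here is a minimal sketch of one standard aggregation: Elo-style rating updates over pairwise results, as popularized by LLM arenas. The policy names, K-factor, and function names below are illustrative assumptions, not RoboArena's actual code.

from collections import defaultdict

K = 32  # update step size; an assumed constant, not RoboArena's setting

def expected_score(r_a, r_b):
    # Elo model: probability that the policy rated r_a beats the one rated r_b.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_ratings(matches, base=1000.0):
    # matches: iterable of (policy_a, policy_b, score_a), score_a in {0, 0.5, 1}.
    ratings = defaultdict(lambda: base)
    for a, b, s_a in matches:
        e_a = expected_score(ratings[a], ratings[b])
        ratings[a] += K * (s_a - e_a)
        ratings[b] += K * ((1.0 - s_a) - (1.0 - e_a))
    return dict(ratings)

# Example: three blind head-to-head evaluations between hypothetical policies.
print(update_ratings([("policyA", "policyB", 1.0),
                      ("policyB", "policyC", 0.5),
                      ("policyA", "policyC", 1.0)]))

A Bradley-Terry maximum-likelihood fit over all pairs is a common order-independent alternative to the sequential updates above.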
RT @ArpitBahety: Imagine a future where robots are part of our daily lives. How can end users teach robots new tasks by directly showing t…
0
17
0
RT @huihan_liu: Meet Casper👻, a friendly robot sidekick who shadows your day, decodes your intents on the fly, and lends a hand while you s…
0
37
0
🚨RL training for contact-rich tasks with a mobile manipulator IN THE REAL WORLD?! 🤯 We're not crazy, just equipped with the right action space! SLAC learns a safe, effective action space via unsupervised RL in sim, enabling real-world RL training in minutes. Check it out! 🚀
Real-world RL, where robots learn directly from physical interactions, is extremely challenging, especially for high-DoF systems like mobile manipulators. 1⃣ Long-horizon tasks and large action spaces lead to difficult policy optimization. 2⃣ Real-world exploration with …
0
2
22
RT @JiahengHu1: Excited to be in ATL for #ICRA2025 to present 🔥FLaRe: fine-tuning large transformer policies with #RL, 15:25 Tuesday @ room…
0
8
0
Loved working on this with our MIT/Stanford/OpenAI collaborators. It brings "The Bitter Lesson" to data curation: skip the hand-tuned heuristics (visual similarity, motion, …) and let the data speak for itself! Datamodels is a fascinating framework 🤯
Ever wondered which data from large datasets (like OXE) actually helps when training/tuning a policy for specific tasks? We present DataMIL, a framework for measuring how each training sample influences policy performance, thereby enabling effective data selection 🧵
0
4
38
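For readers unfamiliar with the datamodels framework these tweets reference, the core move is: train on many random subsets of the data, record downstream performance, then fit a linear model from subset-inclusion indicators to performance, so each learned weight estimates one sample's influence. A minimal sketch with synthetic data follows; all names, shapes, and the simulated performance numbers are assumptions for illustration, not the DataMIL implementation.

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_subsets = 200, 500

# masks[i, j] = 1 if training sample j was included in training subset i.
masks = rng.integers(0, 2, size=(n_subsets, n_samples)).astype(float)

# perf[i] would be the policy's success rate after training on subset i;
# here it is simulated from hidden per-sample effects plus noise.
hidden_effect = rng.normal(0.0, 1.0, n_samples)
perf = masks @ hidden_effect + rng.normal(0.0, 0.1, n_subsets)

# Least-squares fit: each weight approximates one sample's influence
# on task performance.
influence, *_ = np.linalg.lstsq(masks, perf, rcond=None)

# Data curation: keep the samples estimated to help the target task most.
top_k = np.argsort(influence)[-50:]
print("most helpful sample indices:", sorted(top_k[:10]))

In practice the regression is typically regularized and the expensive part is producing the real training runs behind perf; the point is that selection falls out of measured influence rather than hand-tuned heuristics.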
RT @EliasEskin: Extremely excited to announce that I will be joining @UTAustin @UTCompSci in August 2025 as an Assistant Professor! 🎉 I’m…
0
65
0
So happy for @JiahengHu1! He has been rocking it, with outstanding work that pushes the limits of what robot learning can achieve in mobile manipulation and other domains. And one of my first Ph.D. students! Congratulations! 🦾🦾🦾🦾
I'm honored to be awarded the 2025 Two Sigma PhD fellowship, and extremely grateful to my two amazing advisors @RobobertoMM @PeterStone_TX! Looking forward to continuing to advance the field of RL and robotics.
0
0
4
RT @xf1280: ✨Super excited to share what the team has been working on! ♊️🤖 Gemini Robotics is a family of frontier models that are dexter…
0
14
0
Giving a talk as a New Faculty Highlight at AAAI tomorrow morning (9:30am)! Come if you want an overview of some of the work from the lab.
aaai.org
The Thirty-Ninth AAAI Conference on Artificial Intelligence will be held in Philadelphia at the Pennsylvania Convention Center in 2025.
0
4
18
Tired of guessing what tasks people want robots to do for them? Check out our study! We correlate the time people spend on tasks and the emotions they feel while performing them with the desire to automate those tasks, comparing across different groups. And there's an online tool for you to play with the data!
🤔 What tasks do we want robots to handle? Are these preferences based on saved time or the feelings we associate with the tasks? Introducing "Why Automate This?", a study exploring automation preferences across social groups, using feelings & time spent as key factors. 👇 (1/5)
0
0
8
RT @duke_zzwang: In multi-object env, why do most Unsupervised Skill Discovery methods fail to learn complex skills like tool use? Because…
0
12
0
I really like this work from @ShivinDass. It unifies subfields like active and interactive perception within a single theoretical foundation: contextual MDPs. It is all about learning the best actions to find task-relevant information, i.e., the context! If you are at CoRL24, talk to him.
Intelligent agents such as humans explore their surroundings to gather information and complete tasks. In our #CoRL2024 work, Learning to Look 👀, we teach robots to find relevant information in their environment. 🤖✨ 🧵👇
0
0
10
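For readers who haven't met the term above: a contextual MDP (standard definition, not specific to this paper) is a family of MDPs indexed by a context c that selects the dynamics and reward, with the agent acting well without being handed c:

\[
\mathcal{M} = \big(\mathcal{C}, \mathcal{S}, \mathcal{A}, \{T_c, R_c\}_{c \in \mathcal{C}}, \gamma\big),
\qquad
\pi^* = \arg\max_{\pi} \; \mathbb{E}_{c}\, \mathbb{E}_{\pi,\, T_c}\!\Big[\sum_{t \ge 0} \gamma^{t} R_c(s_t, a_t)\Big].
\]

Read this way, "learning the best actions to find task-relevant information" means taking actions that reveal c, after which acting optimally in the resulting MDP becomes much easier.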
RT @JiahengHu1: 🚀Unsupervised RL can learn skills purely from reward-free interactions with an environment. But what form of skills can fac…
0
15
0
We just won the IROS Best Paper Award on Mechanisms for BaRiFlex! 🦾 Congratulations to all my coauthors and @GucheolJ5253 for the fantastic work. BTW, we have been using it in our high school summer school to teach students about mechanisms and imitation learning. A lot of fun!
Are you worried your robot arm will break during manipulation or learning tasks in unstructured human environments? Meet BaRiFlex: a versatile, collision-robust, and affordable robotic gripper designed for resilient robot learning! 🤖✨👇 (1/6)
2
6
59
This will be presented tomorrow at IROS: a simple and cheap hand that you can build yourself for contact-rich manipulation and robot learning tasks!
0
1
7
Let's enable mobile manipulators to perform building-wide tasks! Rutav and the team have made it possible by integrating perception and reasoning in a multi-step central decision-maker that harnesses VLM advances and learns from previous mistakes. Take a look!
🤖 Want your robot to grab you a drink from the kitchen downstairs? 🚀 Introducing BUMBLE: a framework to solve building-wide mobile manipulation tasks by harnessing the power of Vision-Language Models (VLMs). 👇 (1/5)
1
5
27
Join us at the IROS workshop on Environment Dynamics Matters: Embodied Navigation to Movable Objects. I'll give a talk there at 4:30 pm Abu Dhabi time. It will be fun! 🤖
0
0
5
Impressive results on a problem critical to scaling large models in robotics. Inspiring work led by @JiahengHu1, @ehsanik, and the AI2 team.
🚀 Despite efforts to scale up Behavior Cloning for Robots, large-scale BC has yet to live up to its promise. How can we break through the performance plateau? Introducing 🔥FLaRe: fine-tuning large-scale robot policies with Reinforcement Learning. 🧵
0
1
8
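The quoted thread only names the recipe (large-scale behavior cloning, then RL fine-tuning), so here is a toy illustration of that two-stage idea on a bandit with a softmax policy: BC imitates slightly suboptimal demos, then REINFORCE pushes past the imitation plateau. Everything here (environment, rewards, learning rates) is made up for the sketch; FLaRe itself fine-tunes large transformer policies, not a bandit.

import numpy as np

rng = np.random.default_rng(0)
n_actions = 4
logits = np.zeros(n_actions)  # the policy's parameters

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Stage 1: behavior cloning. Demos mostly pick action 2, which is decent
# but not optimal (hypothetical data, standing in for large-scale BC).
demos = rng.choice(n_actions, size=500, p=[0.1, 0.1, 0.7, 0.1])
for a in demos:
    p = softmax(logits)
    grad = -p
    grad[a] += 1.0            # gradient of log-likelihood of the demo action
    logits += 0.05 * grad
print("after BC:", softmax(logits).round(3))  # roughly the demo distribution

# Stage 2: RL fine-tuning with REINFORCE. True rewards make action 3 best,
# so RL has to push the policy past the imitation plateau.
rewards = np.array([0.0, 0.2, 0.5, 1.0])
baseline = 0.0
for _ in range(5000):
    p = softmax(logits)
    a = rng.choice(n_actions, p=p)
    r = rewards[a] + rng.normal(0.0, 0.1)
    baseline += 0.01 * (r - baseline)  # running-mean baseline cuts variance
    grad = -p
    grad[a] += 1.0                     # gradient of log pi(a)
    logits += 0.1 * (r - baseline) * grad
print("after RL:", softmax(logits).round(3))  # mass shifts toward action 3

The point of the toy: BC caps performance at the demonstrator's level, while the RL stage optimizes the actual reward, which is the plateau-breaking behavior the thread describes.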