Arpit Bahety
@ArpitBahety
236 Followers | 84 Following | 13 Media | 47 Statuses
[email protected] | UT Austin, PhD in CS | Columbia University, MS in CS | IIIT-Allahabad, B. Tech. in IT | Interested in AI & Robotics
Joined December 2020
We are excited to release MoMaGen, a data generation method for multi-step bimanual mobile manipulation. MoMaGen turns 1 human-teleoped robot trajectory into 1000s of generated trajectories automatically. Website: https://t.co/DYKvqY4bII arXiv: https://t.co/lDffi0FXHl
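(For readers curious how one demo can become thousands, below is only a generic sketch of demonstration replay under randomized object placements; the scene sampling and the omitted reachability/planning checks are illustrative assumptions, not the MoMaGen implementation described in the paper.)

```python
import numpy as np

rng = np.random.default_rng(0)

def random_object_pose():
    """Random planar placement of an object as a 4x4 homogeneous transform."""
    theta = rng.uniform(-np.pi, np.pi)
    x, y = rng.uniform(-0.5, 0.5, size=2)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]]
    T[:2, 3] = [x, y]
    return T

# One demonstrated segment: end-effector waypoints stored in the object frame.
demo_segment = [np.eye(4) for _ in range(10)]

# Re-express the segment in many randomized object poses. A real system would
# also verify reachability/collisions and plan the mobile base, omitted here.
generated = []
for _ in range(1000):
    T_obj = random_object_pose()
    generated.append([T_obj @ wp for wp in demo_segment])

print(len(generated), "synthetic trajectories")
```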
We have been overlooking a key factor in LLM-as-a-judge evaluation: the feedback collection protocol. Our #COLM2025 paper presents a comprehensive study on how feedback protocols shape reliability and bias in LLM evaluations.
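(As background, a minimal sketch of two common feedback collection protocols for LLM-as-a-judge, absolute scoring vs. pairwise comparison; the prompts and the `judge` callable are hypothetical illustrations, not the specific protocols or code from the paper.)

```python
# `judge` is any callable that maps a prompt string to the judge model's reply.
def absolute_protocol(judge, question, answer):
    prompt = (
        "Rate the following answer on a 1-5 scale for helpfulness and accuracy.\n"
        f"Question: {question}\nAnswer: {answer}\n"
        "Reply with only the number."
    )
    return judge(prompt)

def pairwise_protocol(judge, question, answer_a, answer_b):
    prompt = (
        "Which answer is better? Reply with exactly 'A' or 'B'.\n"
        f"Question: {question}\nAnswer A: {answer_a}\nAnswer B: {answer_b}"
    )
    return judge(prompt)

# Example with a stubbed judge; swap in a real model call to run an evaluation.
stub_judge = lambda prompt: "A"
print(pairwise_protocol(stub_judge, "What is 2+2?", "4", "5"))
```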
Which data is best for training few-shot imitation policies for robot manipulation? Some think it's the data that looks similar, or has similar motion, or comes with related language labels. They are all right AND wrong: depending on the task, sometimes this similarity helps but
(1/N) How close are we to enabling robots to solve the long-horizon, complex tasks that matter in everyday life? We are thrilled to invite you to join the 1st BEHAVIOR Challenge @NeurIPS 2025, submission deadline: 11/15. Prizes: $1,000 / $500 / $300
A huge thanks to my amazing collaborators Arnav Balaji @babbatem @RobobertoMM! Work done at @RobInRoboticsUT at UT Austin @texas_robotics @UTAustin
For more details about SafeMimic and our experimental evaluation, also check out our paper and website. Paper: https://t.co/RfEQZomZGa Website:
We evaluate SafeMimic on 7 challenging tasks involving multi-step navigation and manipulation, articulated-object manipulation (like opening ovens and drawers), and contact-rich manipulation (like the whiteboard-erasing task).
Here is SafeMimic in action on the task of loading a can into the oven. SafeMimic first extracts the semantic segments and the actions from the video. It then adapts the actions to the robot's embodiment in a safe and autonomous manner.
To enable the robot to act safely in the real world, we use an ensemble of safety Q-functions. Training them in the real world would be dangerous, as the robot needs to actually experience unsafe actions to learn. So, we train the Safety Q-functions in simulation.
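(A rough sketch of what an ensemble-based safety check can look like; the network sizes, threshold, and pessimistic-minimum rule below are illustrative assumptions, not the SafeMimic architecture or training setup.)

```python
import torch
import torch.nn as nn

class SafetyQ(nn.Module):
    """One ensemble member: predicts a safety value for a state-action pair."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def is_safe(ensemble, obs, act, threshold=0.0):
    """Pessimistic check: even the worst ensemble member must predict a
    safety value above the threshold for the action to be executed."""
    with torch.no_grad():
        values = torch.stack([q(obs, act) for q in ensemble])
    return bool(values.min() > threshold)

ensemble = [SafetyQ(obs_dim=10, act_dim=4) for _ in range(5)]
obs, act = torch.randn(10), torch.randn(4)
print(is_safe(ensemble, obs, act))
```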
SafeMimic first extracts the semantic segments ("what") and the human actions ("how") from a single third-person human video. Next, it safely and autonomously adapts the human actions to the robot's embodiment. Finally, it uses a policy memory to learn from successful trajectories.
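(Schematically, the three stages read like the skeleton below; every function body is a placeholder for illustration, not the actual SafeMimic code.)

```python
def extract_segments(human_video):
    """Stage 1 ("what"/"how"): split the video into semantic segments, each
    paired with the demonstrated hand/object motion. Placeholder body."""
    return []

def adapt_segment_safely(robot, segment):
    """Stage 2: retarget the human motion to the robot's embodiment, accepting
    only actions that pass a safety check (see the Q-function sketch above).
    Returns a robot trajectory, or None if no safe execution is found."""
    return None

policy_memory = []  # Stage 3: successful trajectories, reused to learn a policy

def learn_from_video(robot, human_video):
    for segment in extract_segments(human_video):
        trajectory = adapt_segment_safely(robot, segment)
        if trajectory is None:
            return False  # stop if a segment cannot be executed safely
        policy_memory.append(trajectory)
    return True
```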
We present SafeMimic, a framework to learn new multi-step mobile manipulation tasks safely and autonomously from a single third-person human video.
Imagine a future where robots are part of our daily lives. How can end users teach robots new tasks by directly showing them, just like teaching another person?
Most assistive robots live in labs. We want to change that. FEAST enables care recipients to personalize mealtime assistance in-the-wild, with minimal researcher intervention across diverse in-home scenarios. Outstanding Paper & Systems Paper Finalist @RoboticsSciSys (1/8)
Despite efforts to scale up Behavior Cloning (BC) for robots, large-scale BC has yet to live up to its promise. How can we break through the performance plateau? Introducing FLaRe: fine-tuning large-scale robot policies with Reinforcement Learning. https://t.co/iRC1NTgoFI
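(A generic sketch of the overall recipe only, assuming a BC-pretrained policy and a simplified environment interface; the REINFORCE update is a stand-in, not FLaRe's actual algorithm, losses, or hyperparameters, and the checkpoint path is hypothetical.)

```python
import torch
import torch.nn as nn

# Small stand-in policy; in practice this would be the large pretrained BC policy.
policy = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 2))
# policy.load_state_dict(torch.load("bc_checkpoint.pt"))  # hypothetical checkpoint
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

def rollout(env, policy):
    """Collect log-probs and rewards for one episode. `env` is assumed to
    expose a simplified reset() -> obs and step(a) -> (obs, reward, done)."""
    obs = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        obs, reward, done = env.step(action.item())
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
    return log_probs, rewards

def reinforce_update(log_probs, rewards):
    """One REINFORCE step: reward, not imitation loss, drives the update.
    Real systems typically use a more stable on-policy method such as PPO."""
    loss = -torch.stack(log_probs).sum() * sum(rewards)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```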
Want your robot to grab you a drink from the kitchen downstairs? Introducing BUMBLE: a framework to solve building-wide mobile manipulation tasks by harnessing the power of Vision-Language Models (VLMs). (1/5) https://t.co/61eev1Jyvw
ScrewMimic received the Outstanding Student Paper Finalist award at #RSS2024. Congratulations to my co-authors @RobobertoMM @PrnkMandikal @babbatem!
Excited to present ScrewMimic at RSS tomorrow (July 17, Wednesday)! Please check out the talk at 4 pm and the poster all day long in Senaatszaal.
Against all the network delays and spotty conference WiFi, I'm glad we could make this demo happen. Check out TeleMoMa for more details:
robin-lab.cs.utexas.edu
TeleMoMa: A Modular and Versatile Teleoperation System for Mobile Manipulation
Prof. Roberto Martín-Martín giving a demo from Yokohama, teleoperating a robot from his laptop during a talk. The robot is at his lab in Austin.
We are excited to present ScrewMimic at ICRA 2024 on May 13 (Monday)! Please check out the spotlight talk (10 am) and our posters at the "Future Roadmap for Manipulation Skills" and "Bimanual Manipulation" workshops.
A huge thanks to my amazing collaborators @PrnkMandikal @babbatem @RobobertoMM! Work done at @RobInRoboticsUT at UT Austin @texas_robotics @UTAustin