Arpit Bahety Profile
Arpit Bahety

@ArpitBahety

Followers: 236 · Following: 84 · Media: 13 · Statuses: 47

[email protected] | UT Austin, PhD in CS | Columbia University, MS in CS | IIIT-Allahabad, B. Tech. in IT | Interested in AI & Robotics

Joined December 2020
@ChengshuEricLi
Chengshu Li
5 days
We are excited to release MoMaGen, a data generation method for multi-step bimanual mobile manipulation. MoMaGen turns 1 human-teleoped robot trajectory into 1000s of generated trajectories automatically.🚀 Website: https://t.co/DYKvqY4bII arXiv: https://t.co/lDffi0FXHl
1
35
157
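For context on what "turning one trajectory into thousands" can mean mechanically, here is a rough sketch in the spirit of MimicGen-style replay-based data generation. This is an assumption about the general family of methods, with made-up names; MoMaGen's actual algorithm is described in the arXiv paper.

```python
# Sketch of replay-based demo augmentation (a MimicGen-style assumption,
# NOT necessarily MoMaGen's method). A source demo is split into
# object-centric segments; each segment stores end-effector poses relative
# to its reference object, so it can be re-anchored in randomized scenes.
import numpy as np

def augment_demo(segments, sample_scene, n_new=1000):
    """segments: list of (object_id, rel_traj) with rel_traj a list of 4x4
    poses relative to the reference object. sample_scene() returns a dict
    mapping object_id -> randomized 4x4 world pose."""
    new_demos = []
    for _ in range(n_new):
        scene = sample_scene()
        demo = []
        for obj_id, rel_traj in segments:
            T_obj = scene[obj_id]                        # new object pose
            demo.extend(T_obj @ T_rel for T_rel in rel_traj)
        new_demos.append(np.stack(demo))                 # (T, 4, 4) trajectory
    return new_demos
```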
@tuhina_tripathi
Tuhina Tripathi
21 days
We have been overlooking a key factor in LLM-as-a-judge evaluation: the feedback collection protocol. Our #COLM2025 paper presents a comprehensive study on how feedback protocols shape reliability and bias in LLM evaluations.
2
6
28
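To make "feedback collection protocol" concrete: two common ways to elicit a judgment from an LLM judge are absolute rating and pairwise comparison, and their reliability and biases can differ. The prompt templates below are illustrative stand-ins, not the ones studied in the paper.

```python
# Two common LLM-as-a-judge feedback protocols (illustrative templates only).

def absolute_rating_prompt(question: str, answer: str) -> str:
    """Protocol 1: score a single answer on a fixed scale."""
    return (
        "Rate the answer on a 1-5 scale, where 5 is best.\n"
        f"Question: {question}\nAnswer: {answer}\nRating:"
    )

def pairwise_preference_prompt(question: str, answer_a: str, answer_b: str) -> str:
    """Protocol 2: compare two answers head-to-head (position can bias this)."""
    return (
        "Which answer is better, A or B?\n"
        f"Question: {question}\nA: {answer_a}\nB: {answer_b}\nBetter:"
    )
```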
@sateeshk21
Sateesh Kumar
1 month
Which data is best for training few-shot imitation policies for robot manipulation? Some think it's the data that looks similar, or has similar motion, or comes with related language labels. They are all right AND wrong: depending on the task, sometimes this similarity helps but
1
4
11
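One way to read the three notions of similarity in the (truncated) tweet is as retrieval criteria over a demonstration pool. The sketch below mixes visual, motion, and language similarity with task-dependent weights; the embedding functions and the weights are hypothetical placeholders.

```python
# Hypothetical similarity-weighted retrieval of training demos for a
# few-shot imitation task; the embed_* functions are placeholder encoders.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def retrieve_demos(pool, query, embed_visual, embed_motion, embed_lang,
                   weights=(1.0, 1.0, 1.0), k=5):
    """Rank candidate demos by a weighted sum of three similarities; which
    weighting actually helps is task-dependent (the tweet's point)."""
    scores = [
        weights[0] * cosine(embed_visual(d), embed_visual(query))
        + weights[1] * cosine(embed_motion(d), embed_motion(query))
        + weights[2] * cosine(embed_lang(d), embed_lang(query))
        for d in pool
    ]
    top = np.argsort(scores)[::-1][:k]
    return [pool[i] for i in top]
```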
@drfeifei
Fei-Fei Li
2 months
(1/N) How close are we to enabling robots to solve the long-horizon, complex tasks that matter in everyday life? 🚨 We are thrilled to invite you to join the 1st BEHAVIOR Challenge @NeurIPS 2025, submission deadline: 11/15. 🏆 Prizes: 🥇 $1,000 🥈 $500 🥉 $300
40
283
1K
@ArpitBahety
Arpit Bahety
4 months
A huge thanks to my amazing collaborators Arnav Balaji @babbatem @RobobertoMM! Work done at @RobInRoboticsUT at UT Austin @texas_robotics @UTAustin
0
0
3
@ArpitBahety
Arpit Bahety
4 months
For more details about SafeMimic and our experimental evaluation, also check out our paper and website. Paper: https://t.co/RfEQZomZGa Website:
1
0
3
@ArpitBahety
Arpit Bahety
4 months
We evaluate SafeMimic on 7 challenging tasks involving multi-step navigation and manipulation, articulated object manipulation (like opening ovens and drawers), and contact-rich manipulation (like the whiteboard-erasing task).
1
0
3
@ArpitBahety
Arpit Bahety
4 months
Here is SafeMimic in action for the task of loading a can into the oven. SafeMimic first extracts the semantic segments and the actions from the video. It then adapts these actions to the robot's embodiment in a safe and autonomous manner.
1
0
3
@ArpitBahety
Arpit Bahety
4 months
To enable the robot to act safely in the real world, we use an ensemble of safety Q-functions. Training them in the real world would be dangerous, as the robot needs to actually experience unsafe actions to learn. So, we train the safety Q-functions in simulation.
1
0
3
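A minimal sketch of the safety-ensemble idea from the tweet above, assuming a PyTorch setup. The network sizes, the [0, 1] safety score, and the unanimous-agreement rule are illustrative choices, not necessarily SafeMimic's.

```python
# Ensemble of safety Q-functions: each critic scores (state, action) pairs;
# an action is accepted only if the whole ensemble agrees it is safe.
import torch
import torch.nn as nn

class SafetyQ(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),   # safety score in [0, 1]
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def is_safe(ensemble, state, action, threshold=0.8):
    """Conservative check: require every member (trained in simulation,
    where unsafe outcomes are cheap to experience) to clear the threshold."""
    with torch.no_grad():
        scores = torch.stack([q(state, action) for q in ensemble])
    return bool(scores.min() >= threshold)
```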
@ArpitBahety
Arpit Bahety
4 months
SafeMimic first extracts the semantic segments ("what") and the human actions ("how") from a single third-person human video. Next, it safely and autonomously adapts the human actions to the robot's embodiment. Finally, it uses a policy memory to learn from successful trajectories.
1
0
3
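The three stages read naturally as a loop: parse the video once, adapt each segment to the robot safely, and cache what succeeds. A hypothetical outline follows; every helper callable here is an illustrative stand-in, not the paper's actual interface.

```python
# Hypothetical outline of the extract -> adapt -> remember loop described
# in the thread above.
def safemimic_outline(video, parse, adapt, is_safe, execute):
    """parse(video) -> [(segment_label, human_action)];
    adapt(human_action) yields candidate robot actions;
    execute(action) -> True on success."""
    policy_memory = {}                            # stage 3: reuse past successes
    for segment, human_action in parse(video):    # stage 1: "what" and "how"
        if segment in policy_memory:
            execute(policy_memory[segment])
            continue
        for candidate in adapt(human_action):     # stage 2: safe adaptation
            if is_safe(candidate) and execute(candidate):
                policy_memory[segment] = candidate
                break
    return policy_memory
```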
@ArpitBahety
Arpit Bahety
4 months
We present SafeMimic, a framework to learn new multi-step mobile manipulation tasks safely and autonomously from a single third-person human video.
1
0
3
@ArpitBahety
Arpit Bahety
4 months
Imagine a future where robots are part of our daily lives. How can end users teach robots new tasks by directly showing them, just like teaching another person? 🧵👇
3
17
44
@rkjenamani
Rajat Kumar Jenamani
4 months
Most assistive robots live in labs. We want to change that. FEAST enables care recipients to personalize mealtime assistance in-the-wild, with minimal researcher intervention across diverse in-home scenarios. 🏆 Outstanding Paper & Systems Paper Finalist @RoboticsSciSys 🧡1/8
5
69
326
@JiahengHu1
Jiaheng Hu
1 year
🚀 Despite efforts to scale up Behavior Cloning for Robots, large-scale BC has yet to live up to its promise. How can we break through the performance plateau? Introducing 🔥FLaRe: fine-tuning large-scale robot policies with Reinforcement Learning. https://t.co/iRC1NTgoFI 🧡
1
37
122
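As a generic illustration of the BC-then-RL recipe (not FLaRe's specific algorithm, which the paper details), one can warm-start a policy from behavior-cloned weights and continue training with a policy-gradient objective:

```python
# Warm-start from behavior cloning, then fine-tune with REINFORCE
# (a deliberately simple stand-in for large-scale RL fine-tuning).
import torch

def rl_finetune(policy, env, optimizer, episodes=1000, gamma=0.99):
    # hypothetical checkpoint path; policy(obs) must return a torch distribution
    policy.load_state_dict(torch.load("bc_pretrained.pt"))
    for _ in range(episodes):
        log_probs, rewards = [], []
        obs, done = env.reset(), False        # classic gym API assumed
        while not done:
            dist = policy(torch.as_tensor(obs, dtype=torch.float32))
            action = dist.sample()
            log_probs.append(dist.log_prob(action).sum())
            obs, reward, done, _ = env.step(action.numpy())
            rewards.append(reward)
        returns, g = [], 0.0                  # discounted returns, back to front
        for r in reversed(rewards):
            g = r + gamma * g
            returns.append(g)
        returns = torch.tensor(list(reversed(returns)))
        loss = -(torch.stack(log_probs) * returns).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```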
@rutavms
Rutav
1 year
🤖 Want your robot to grab you a drink from the kitchen downstairs? 🚀 Introducing BUMBLE: a framework to solve building-wide mobile manipulation tasks by harnessing the power of Vision-Language Models (VLMs). 👇 (1/5) 🌐 https://t.co/61eev1Jyvw
7
38
174
@ArpitBahety
Arpit Bahety
1 year
ScrewMimic received the Outstanding Student Paper Finalist award at #RSS2024. Congratulations to my co-authors @RobobertoMM @PrnkMandikal @babbatem! 🥳
3
1
56
@ArpitBahety
Arpit Bahety
1 year
Excited to present ScrewMimic at RSS tomorrow (July 17, Wednesday)! Please check out the talk at 4 pm and the poster, up all day, in the Senaatszaal.
1
3
30
@ShivinDass
Shivin Dass
1 year
Against all the network delays and spotty conference wifi, I'm glad we could make this demo happen 😀 Check out TeleMoMa for more details:
TeleMoMa: A Modular and Versatile Teleoperation System for Mobile Manipulation (robin-lab.cs.utexas.edu)
@chris_j_paxton
Chris Paxton
1 year
Prof. Roberto Martín-Martín giving a demo from Yokohama, teleoperating a robot from his laptop during a talk. The robot is at his lab in Austin.
0
3
15
@ArpitBahety
Arpit Bahety
1 year
We are excited to present ScrewMimic at ICRA 2024 on May 13 (Monday)! Please check out the spotlight talk (10 am) and our posters at the "Future Roadmap for Manipulation Skills" and "Bimanual Manipulation" workshops.
0
3
20
@ArpitBahety
Arpit Bahety
1 year
A huge thanks to my amazing collaborators @PrnkMandikal @babbatem @RobobertoMM! Work done at @RobInRoboticsUT at UT Austin @texas_robotics @UTAustin
0
0
1