Rutav
@rutavms
Followers 496 · Following 775 · Media 10 · Statuses 178
🤖 Want your robot to grab you a drink from the kitchen downstairs? 🚀 Introducing BUMBLE: a framework to solve building-wide mobile manipulation tasks by harnessing the power of Vision-Language Models (VLMs). 👇 (1/5) 🌐 https://t.co/61eev1Jyvw
Replies 7 · Reposts 38 · Likes 174
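BUMBLE's implementation details aren't in this tweet, so the following is only a minimal sketch of the general VLM-in-the-loop pattern it alludes to: a vision-language model repeatedly picks a parameterized skill given the current camera view. The skill list, prompt format, and the `vlm`/`robot` interfaces are all hypothetical, not BUMBLE's actual API.

```python
# Hypothetical sketch of a VLM-in-the-loop mobile manipulation agent.
# Skill names, prompt format, and interfaces are illustrative assumptions.
import json

SKILLS = ["navigate_to(room)", "open_door()", "pick(object)",
          "place(surface)", "done()"]

def select_skill(vlm, image, task, history):
    """Ask a vision-language model to choose the next parameterized skill."""
    prompt = (
        f"Task: {task}\nAvailable skills: {SKILLS}\n"
        f"Executed so far: {history}\n"
        'Answer as JSON: {"skill": "...", "args": {...}}'
    )
    return json.loads(vlm.query(image=image, text=prompt))

def run_task(robot, vlm, task, max_steps=50):
    history = []
    for _ in range(max_steps):
        choice = select_skill(vlm, robot.get_camera_image(), task, history)
        if choice["skill"] == "done":
            return True
        robot.execute(choice["skill"], choice.get("args", {}))
        history.append(choice)
    return False
```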
Congratulations @mangahomanga! Incredible opportunity for students interested in robot learning and manipulation
I'll be joining the faculty @JohnsHopkins late next year as a tenure-track assistant professor in @JHUCompSci. Looking for PhD students to join me in tackling fun problems in robot manipulation, learning from human data, understanding+predicting physical interactions, and beyond!
Replies 0 · Reposts 0 · Likes 4
🤖What if a robot could perform a new task just from a natural language command, with zero demonstrations? Our new work, NovaFlow, makes it possible! We use a pre-trained video generative model to create a video of the task, then translate it into a plan for real-world robot…
Replies 16 · Reposts 85 · Likes 538
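The tweet describes a two-stage pipeline: a pre-trained video model imagines the task, then the imagined video is translated into a robot plan. A hedged sketch of that structure, with every name (`video_model`, `tracker`, `planner`) a placeholder rather than NovaFlow's actual API:

```python
# Minimal sketch of the generate-then-plan pattern from the tweet.
# All interfaces here are hypothetical placeholders, not NovaFlow's code.

def plan_from_language(video_model, tracker, planner, robot, command):
    first_frame = robot.get_camera_image()          # current scene
    # Stage 1: a pre-trained video generator "imagines" the task being done,
    # conditioned on the real first frame and the language command.
    frames = video_model.generate(image=first_frame, prompt=command)
    # Stage 2: translate the imagined video into executable motion, e.g. by
    # tracking how objects move across the generated frames.
    object_flow = tracker.track(frames)             # per-frame object motion
    trajectory = planner.solve(object_flow)         # map motion to actions
    robot.execute(trajectory)
```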
Which data is best for training few-shot imitation policies for robot manipulation? Some think it’s the data that looks similar, or has similar motion, or comes with related language labels. They are all right AND wrong: depending on the task, sometimes this similarity helps but…
Replies 1 · Reposts 4 · Likes 11
I am presenting COLLAGE 🎨 at @corl_conf today. Spotlight presentation: 3:30 pm. Poster: 4:30–6:00 pm, Poster #41. COLLAGE 🎨 is a data curation approach that automatically combines data subsets selected using different metrics, by weighting each subset based on its relevance.
Replies 0 · Reposts 1 · Likes 1
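The tweet gives the core recipe: take data subsets selected by different similarity metrics and mix them, weighting each subset by its relevance to the target task. A self-contained sketch of relevance-weighted mixing (the proportional-sampling scheme below is an assumption, not the paper's exact weighting):

```python
import random

def relevance_weighted_mix(subsets, relevance, n_samples):
    """Mix data subsets in proportion to their relevance scores.

    subsets:   {metric_name: list of trajectories}, e.g. chosen by visual,
               motion, or language similarity (hypothetical structure).
    relevance: {metric_name: nonnegative relevance to the target task}.
    """
    total = sum(relevance.values())
    weights = {name: score / total for name, score in relevance.items()}
    mixed = []
    for name, data in subsets.items():
        k = round(weights[name] * n_samples)  # this subset's share of budget
        mixed.extend(random.choices(data, k=k))
    random.shuffle(mixed)
    return mixed
```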
Learning from humans will be very useful for making humanoids capable! @DvijKalaria's recent work DreamControl takes a step towards it
❓How can humanoids learn to squat and open a drawer? Reward-tuning for every such whole-body task is infeasible. 🚀Meet DreamControl: robots "dream" how people move and manipulate objects in varied scenarios, practice using them in simulation, and then act naturally in the…
Replies 0 · Reposts 0 · Likes 8
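The quoted thread outlines a two-stage recipe: "dream" a human-like motion for the task, then practice it in simulation before acting naturally. A hedged sketch of that structure (the interfaces and reward shaping are illustrative assumptions, not DreamControl's code):

```python
# Illustrative two-stage sketch: sample a human-motion prior, then run RL in
# simulation with a reward that tracks both task progress and staying close
# to the dreamed reference motion. All interfaces are placeholders.

def train_whole_body_skill(motion_prior, sim, rl, task_description):
    # Stage 1: "dream" a human-style reference motion for the task,
    # e.g. squatting down while reaching toward a drawer handle.
    reference = motion_prior.sample(task_description)

    # Stage 2: practice in simulation; reward task completion while
    # penalizing deviation from the human-like reference.
    def reward(state, action):
        return sim.task_progress(state) - sim.deviation(state, reference)

    return rl.train(sim, reward)
```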
See our paper + website for details: 📄 https://t.co/HepueInO9X 🌐 https://t.co/MTkKGvNzUE
I would like to thank my collaborators, @dafeijing, Qi Wang, @SteveTod1998, @sateeshk21, @kiwi_sherbet, @RobobertoMM, @yukez, and the people at RPL and RobIn Lab @texas_robotics.
[arxiv.org link preview: "We aim to enable humanoid robots to efficiently solve new manipulation tasks from a few video examples. In-context learning (ICL) is a promising framework for achieving this goal due to its…"]
Replies 0 · Reposts 0 · Likes 9
In-context learning allows fast and data-efficient learning. How do we enable humanoids to do it? We propose:
1. Collect human play videos—cheaper and faster than teleop data
2. Meta-train for learning to learn in-context
3. Deploy directly on humanoids, no teleop data needed
Replies 1 · Reposts 0 · Likes 7
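Step 2 of the recipe above ("meta-train for learning to learn in-context") can be sketched as episodic training: each episode conditions the policy on a few human play clips and asks it to imitate a held-out clip. The dataset/policy interfaces and the use of retargeted motion as labels are assumptions, not MimicDroid's exact code:

```python
# Hedged meta-training sketch: context clips in, held-out clip's motion out.

def meta_train(policy, play_dataset, optimizer, steps=100_000):
    for _ in range(steps):
        # Sample a few human play clips showing the same behavior.
        clips = play_dataset.sample_related_clips(k=3)
        context, target = clips[:-1], clips[-1]
        # Condition on the context clips and predict the target clip's
        # motion (e.g., hand/object trajectories retargeted to the robot).
        pred = policy(context=context, observations=target.frames)
        loss = ((pred - target.retargeted_motion) ** 2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```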
Intelligent humanoids should have the ability to quickly adapt to new tasks by observing humans. Why is such adaptability important?
🌍 Real-world diversity is hard to fully capture in advance
🧠 Adaptability is central to natural intelligence
We present MimicDroid 👇 🌐
Replies 7 · Reposts 40 · Likes 121
We’re hiring a postdoc at NTU Singapore through the fellowship opportunity. This is a collaborative project on epistemic robot learning with my colleague at NTU, Alan Siu Lun Chau (@Chau9991), who specializes in statistical machine learning. Further details about the project…
Replies 1 · Reposts 3 · Likes 18
Open-sourcing a useful tool to calibrate camera extrinsics painlessly in a minute, no checkerboards! It's based on EasyHEC, using differentiable rendering to optimize extrinsics given object meshes+poses. Crazy that even a piece of paper works too. Code: https://t.co/CSmD2iIXuK
Replies 6 · Reposts 42 · Likes 244
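For context, the EasyHEC-style idea the tweet mentions can be sketched as gradient-based pose fitting: render the known object with candidate extrinsics and minimize the mismatch against observed silhouettes. `render_mask` stands in for a differentiable renderer, and the 6-D pose parameterization is an assumption:

```python
import torch

def calibrate_extrinsics(render_mask, observed_masks, mesh, object_poses,
                         steps=500, lr=1e-2):
    """Fit camera extrinsics by differentiable silhouette matching.

    render_mask(mesh, object_pose, camera_pose) -> predicted mask tensor
    is a placeholder for a differentiable renderer.
    """
    pose6d = torch.zeros(6, requires_grad=True)   # axis-angle + translation
    opt = torch.optim.Adam([pose6d], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for obs, obj_pose in zip(observed_masks, object_poses):
            pred = render_mask(mesh, obj_pose, camera_pose=pose6d)
            loss = loss + ((pred - obs) ** 2).mean()  # silhouette mismatch
        opt.zero_grad()
        loss.backward()
        opt.step()
    return pose6d.detach()
```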
Honored to give an Early Career Invited Talk at #IJCAI today. See you at 11:30am in room 520C!
Replies 1 · Reposts 3 · Likes 21
I’ll be at #ICML2025 presenting our paper, “Proto Successor Measure: Representing the Behavior Space of an RL Agent”. Excited to connect with others working on unsupervised RL and RL more broadly. I’m also on the lookout for research collaborations and opportunities in industry.
Replies 0 · Reposts 4 · Likes 56
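As background (a standard definition, not the paper's contribution): the successor measure of a policy generalizes successor representations by measuring discounted state visitation, so representing its span is one way to characterize an agent's behavior space.

```latex
% Successor measure of policy \pi: expected discounted visitation of a
% state set B, starting from state s and following \pi thereafter.
M^{\pi}(s, B) \;=\; \sum_{t=0}^{\infty} \gamma^{t}\,
  \Pr\left(s_{t} \in B \mid s_{0} = s,\ \pi\right),
\qquad B \subseteq \mathcal{S}.
```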
Digital twins for personalized healthcare and sports. Excited to see future updates from @MyolabAI — congrats @vikashplus and team!
Replies 0 · Reposts 1 · Likes 7
Imagine a future where robots are part of our daily lives — how can end users teach robots new tasks by directly showing them, just like teaching another person? 🧵👇
Replies 3 · Reposts 17 · Likes 44
Empowering care recipients with personalized mealtime assistance from robots. Congrats on being named an Outstanding Paper & Systems Paper finalist — rooting for you! 👏🤖🏡
Most assistive robots live in labs. We want to change that. FEAST enables care recipients to personalize mealtime assistance in-the-wild, with minimal researcher intervention across diverse in-home scenarios. 🏆 Outstanding Paper & Systems Paper Finalist @RoboticsSciSys 🧵1/8
Replies 1 · Reposts 0 · Likes 7
Casper👻 helps you teleoperate smarter — easing workload while keeping you in command, powered by the "common sense" in vision-language models. Such teleop systems will enable better robot assistants and also serve as a data collection system.
Meet Casper👻, a friendly robot sidekick who shadows your day, decodes your intents on the fly, and lends a hand while you stay in control! Instead of passively receiving commands, what if a robot could actively sense what you need in the background and step in when confident? (1/n)
Replies 0 · Reposts 0 · Likes 10
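The pattern the thread describes (infer intent in the background, assist only when confident) can be sketched as a confidence-gated shared-control step. The `vlm`/`robot` interfaces and the fixed threshold are illustrative assumptions, not Casper's implementation:

```python
# Confidence-gated assistance: autonomous help only above a threshold.

def shared_control_step(vlm, robot, teleop_cmd, threshold=0.8):
    image = robot.get_camera_image()
    # Background intent inference from the scene and recent operator input.
    intent, confidence = vlm.infer_intent(image, recent_command=teleop_cmd)
    if confidence >= threshold:
        # Confident: let the robot assist toward the inferred intent.
        return robot.plan_toward(intent)
    # Uncertain: pass the human's teleop command through unchanged.
    return teleop_cmd
```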
Real-world RL, where robots learn directly from physical interactions, is extremely challenging — especially for high-DoF systems like mobile manipulators.
1⃣ Long-horizon tasks and large action spaces lead to difficult policy optimization.
2⃣ Real-world exploration with…
Replies 5 · Reposts 54 · Likes 296