Rutav Profile
Rutav

@rutavms

Followers
496
Following
775
Media
10
Statuses
178

🤖🧠 Ph.D. @UTCompSci

Joined June 2019
@rutavms
Rutav
1 year
🤖 Want your robot to grab you a drink from the kitchen downstairs? 🚀 Introducing BUMBLE: a framework to solve building-wide mobile manipulation tasks by harnessing the power of Vision-Language Models (VLMs). 👇 (1/5) 🌐 https://t.co/61eev1Jyvw
7
38
174
@cuijiaxun
Jiaxun Cui 🐿️
4 days
Meta has gone full squid game! Many new PhD NGs (new grads) were deactivated today (I'm also impacted 🥲, happy to chat)
@tydsh
Yuandong Tian
4 days
Several of my team members and I are impacted by this layoff today. Feel free to connect :)
112
94
2K
@rutavms
Rutav
6 days
Congratulations @mangahomanga! Incredible opportunity for students interested in robot learning and manipulation
@mangahomanga
Homanga Bharadhwaj
6 days
I'll be joining the faculty @JohnsHopkins late next year as a tenure-track assistant professor in @JHUCompSci. Looking for PhD students to join me tackling fun problems in robot manipulation, learning from human data, understanding+predicting physical interactions, and beyond!
0
0
4
@Hongyu_Lii
Hongyu Li @ ICCV
16 days
🤖What if a robot could perform a new task just from a natural language command, with zero demonstrations? Our new work, NovaFlow, makes it possible! We use a pre-trained video generative model to create a video of the task, then translate it into a plan for a real-world robot
16
85
538
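The idea sketched in the tweet above (generate a video of the task, then translate it into a robot plan) can be caricatured minimally. This is purely illustrative, not NovaFlow's actual pipeline: assume the generated video has already been reduced to a per-frame track of the manipulated object, and downsample that track into waypoints.

```python
def track_to_plan(track, n_waypoints):
    """Downsample a per-frame object track (e.g. extracted from a
    generated video) into evenly spaced waypoints.
    Illustrative sketch only; the real pipeline is far richer."""
    assert n_waypoints >= 2 and len(track) >= 2
    last = len(track) - 1
    # pick n_waypoints frame indices spread evenly across the track
    idx = [round(i * last / (n_waypoints - 1)) for i in range(n_waypoints)]
    return [track[j] for j in idx]
```

For example, an 11-frame straight-line track reduced to 3 waypoints keeps the first, middle, and last positions.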
@sateeshk21
Sateesh Kumar
1 month
Which data is best for training few-shot imitation policies for robot manipulation? Some think it’s the data that looks similar, or has similar motion, or comes with related language labels. They are all right AND wrong: depending on the task, sometimes this similarity helps but
1
4
11
@sateeshk21
Sateesh Kumar
28 days
I am presenting COLLAGE 🎨 at @corl_conf today. Spotlight presentation: 3:30 pm. Poster: 4:30-6:00 pm, Poster #41. COLLAGE 🎨 is a data curation approach that automatically combines data subsets selected using different metrics, by weighting each subset based on its relevance
0
1
1
@SkildAI
Skild AI
1 month
Chainsaw vs. Robot.
9
10
209
@rutavms
Rutav
1 month
Learning from humans will be very useful for making humanoids capable! @DvijKalaria's recent work DreamControl takes a step towards it
@DvijKalaria
Dvij Kalaria
1 month
❓How can humanoids learn to squat and open a drawer? Reward-tuning for every such whole-body task is infeasible. 🚀Meet DreamControl: robots "dream" how people move and manipulate objects in varied scenarios, practice using them in simulation, and then act naturally in the
0
0
8
@rutavms
Rutav
1 month
In-context learning allows fast and data-efficient learning. How do we enable humanoids to do it? We propose:
1. Collect human play videos, cheaper and faster than teleop data
2. Meta-train for learning to learn in-context
3. Deploy directly on humanoids, no teleop data needed
1
0
7
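Step 2 above (meta-training for in-context learning) implies building episodes where some trajectories act as the in-context examples and one held-out trajectory is the prediction target. A minimal episode sampler under that assumption; names are illustrative, not MimicDroid's actual API:

```python
import random

def make_episode(trajectories, k_context, rng=None):
    """Sample one meta-training episode: k_context trajectories serve
    as in-context examples, and one held-out trajectory is the query
    the policy must imitate. Illustrative sketch, not the paper's code."""
    rng = rng or random.Random()
    # sample without replacement so the query never leaks into context
    picked = rng.sample(trajectories, k_context + 1)
    return {"context": picked[:k_context], "query": picked[k_context]}
```

Sampling without replacement is the key design choice: it forces the model to generalize from the context rather than memorize the query.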
@rutavms
Rutav
1 month
Intelligent humanoids should have the ability to quickly adapt to new tasks by observing humans. Why is such adaptability important?
🌍 Real-world diversity is hard to fully capture in advance
🧠 Adaptability is central to natural intelligence
We present MimicDroid 👇 🌐
7
40
121
@yoonchangsung
Yoonchang Sung
2 months
We’re hiring a postdoc at NTU Singapore through a fellowship opportunity. This is a collaborative project on epistemic robot learning with my NTU colleague Alan Siu Lun Chau (@Chau9991), who specializes in statistical machine learning. Further details about the project
lnkd.in
1
3
18
@Stone_Tao
Stone Tao
2 months
Open-sourcing a useful tool to calibrate camera extrinsics painlessly in a minute, with no checkerboards! It's based on EasyHEC, using differentiable rendering to optimize extrinsics given object meshes and poses. Crazy that even a piece of paper works too. Code: https://t.co/CSmD2iIXuK
6
42
244
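The approach in the tweet above (optimize extrinsics through a differentiable renderer) can be reduced to a toy: recover a camera translation by gradient descent on reprojection error, with finite differences standing in for the differentiable renderer. Purely illustrative; the real EasyHEC-style tool optimizes full 6-DoF extrinsics through rendered object masks.

```python
import numpy as np

def project(points, t):
    """Pinhole projection of camera-frame 3-D points after translating by t."""
    p = points + t
    return p[:, :2] / p[:, 2:3]

def calibrate_translation(points, observed, steps=2000, lr=10.0):
    """Recover the camera translation by gradient descent on mean squared
    reprojection error, using finite differences as a stand-in for a
    differentiable renderer. Toy sketch of the idea, not EasyHEC itself."""
    t = np.zeros(3)
    eps = 1e-5
    for _ in range(steps):
        base = np.mean((project(points, t) - observed) ** 2)
        grad = np.zeros(3)
        for i in range(3):
            dt = np.zeros(3)
            dt[i] = eps
            grad[i] = (np.mean((project(points, t + dt) - observed) ** 2) - base) / eps
        t -= lr * grad
    return t
```

Note the depth component converges much more slowly than the lateral ones, since moving along the optical axis changes projections far less; this is why the learning rate and step count are generous.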
@RobobertoMM
Roberto
2 months
Honored to give an Early Career Invited Talk at #IJCAI today. See you at 11:30am in room 520C!
1
3
21
@shikharbahl
Shikhar Bahl
3 months
0
2
5
@agsidd10
Siddhant Agarwal
4 months
I’ll be at #ICML2025 presenting our paper, “Proto Successor Measure: Representing the Behavior Space of an RL Agent”. Excited to connect with others working on unsupervised RL and RL more broadly. I'm also on the lookout for research collaborations and industry opportunities.
0
4
56
@rutavms
Rutav
4 months
Digital twins for personalized healthcare and sports. Excited to see future updates from @MyolabAI — congrats @vikashplus and team!
@Vikashplus
Vikash Kumar
4 months
📢Life is a sequence of bets, and I’ve picked my next: @MyolabAI It’s incredibly ambitious, comes with high risk, and carries unbounded potential. But it’s a version of the #future I deeply believe in. I believe: ➡️AI will align strongly with humanity, because it maximizes its own
0
1
7
@ArpitBahety
Arpit Bahety
4 months
Imagine a future where robots are part of our daily lives — How can end users teach robots new tasks by directly showing them, just like teaching another person? 🧵👇
3
17
44
@rutavms
Rutav
4 months
Empowering care recipients with personalized mealtime assistance from robots. Congrats on being named an Outstanding Paper & Systems Paper finalist, rooting for you! 👏🤖🏡
@rkjenamani
Rajat Kumar Jenamani
4 months
Most assistive robots live in labs. We want to change that. FEAST enables care recipients to personalize mealtime assistance in-the-wild, with minimal researcher intervention across diverse in-home scenarios. 🏆 Outstanding Paper & Systems Paper Finalist @RoboticsSciSys 🧵1/8
1
0
7
@rutavms
Rutav
4 months
Casper👻 helps you teleoperate smarter, easing workload while keeping you in command, powered by the "common sense" in vision-language models. Such teleop systems will enable better robot assistants as well as better data collection.
@huihan_liu
Huihan Liu
4 months
Meet Casper👻, a friendly robot sidekick who shadows your day, decodes your intents on the fly, and lends a hand while you stay in control! Instead of passively receiving commands, what if a robot actively senses what you need in the background and steps in when confident? (1/n)
0
0
10
@JiahengHu1
Jiaheng Hu
5 months
Real-world RL, where robots learn directly from physical interactions, is extremely challenging, especially for high-DoF systems like mobile manipulators.
1⃣ Long-horizon tasks and large action spaces lead to difficult policy optimization.
2⃣ Real-world exploration with
5
54
296