Huihan Liu
@huihan_liu
Followers
3K
Following
889
Media
29
Statuses
257
PhD @UTAustin | 👩🏻-in-the-Loop Learning for 🤖 | prev @AIatMeta @MSFTResearch @berkeley_ai | undergrad @UCBerkeley 🐻
Austin, TX
Joined November 2020
Meet Casper 👻, a friendly robot sidekick who shadows your day, decodes your intents on the fly, and lends a hand while you stay in control! Instead of passively receiving commands, what if a robot actively senses what you need in the background and steps in when confident? (1/n)
6
40
159
Congratulations @mangahomanga!! Incredible opportunity for aspiring students interested in robot learning. Looking forward to your amazing work!!
I'll be joining the faculty @JohnsHopkins late next year as a tenure-track assistant professor in @JHUCompSci. Looking for PhD students to join me in tackling fun problems in robot manipulation, learning from human data, understanding + predicting physical interactions, and beyond!
1
2
18
Check out MimicDroid - helping humanoid robots adapt to new tasks from human videos with in-context learning!
Intelligent humanoids should be able to quickly adapt to new tasks by observing humans. Why is such adaptability important? 🌍 Real-world diversity is hard to fully capture in advance. 🧠 Adaptability is central to natural intelligence. We present MimicDroid.
0
3
24
Excited that Casper 👻 was accepted to CoRL 2025! #CoRL2025 A big thank you to all the collaborators :)
Meet Casper 👻, a friendly robot sidekick who shadows your day, decodes your intents on the fly, and lends a hand while you stay in control! Instead of passively receiving commands, what if a robot actively senses what you need in the background and steps in when confident? (1/n)
3
2
81
3D-print tactile sensors anywhere inside your fin-ray fingers! We present FORTE, a solution for sensorizing compliant fingers from the inside with high-resolution force and slip sensing. 🔗 https://t.co/S9WSnE5YJG With precise and responsive tactile feedback, FORTE can gently handle
3
36
153
What makes data “good” for robot learning? We argue: it's the data that drives closed-loop policy success! Introducing CUPID 💘, a method that curates demonstrations not by "quality" or appearance, but by how they influence policy behavior, using influence functions. (1/6)
6
24
144
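The tweet above names influence functions as the machinery behind CUPID. As a rough illustration only (not CUPID's released code), here is a minimal first-order sketch: it drops the Hessian term from the classical influence estimate and scores each demo by how its training gradient aligns with the gradient of a differentiable proxy for closed-loop failure. The `(obs, action)` demo format, the MSE behavior-cloning loss, and `eval_loss` are all assumptions made for the example.

```python
# Hypothetical first-order influence scoring (Hessian omitted), in the
# spirit of CUPID but NOT its actual implementation.
import torch

def flat_grad(loss, params):
    """Flatten d(loss)/d(params) into one vector."""
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def rank_demos(policy, demos, eval_loss):
    """Rank demos by their estimated effect on a closed-loop failure loss.

    policy    -- torch.nn.Module mapping observation tensors to actions
    demos     -- list of (obs, action) tensor pairs (toy format)
    eval_loss -- differentiable scalar proxy for closed-loop failure
    """
    params = [p for p in policy.parameters() if p.requires_grad]
    g_eval = flat_grad(eval_loss, params)
    scores = []
    for obs, action in demos:
        train_loss = torch.nn.functional.mse_loss(policy(obs), action)
        # One SGD step on this demo changes eval_loss by roughly
        # -lr * <g_eval, g_demo>: a positive dot product means the demo
        # reduces failure, so higher scores mark more helpful demos.
        scores.append(torch.dot(g_eval, flat_grad(train_loss, params)).item())
    return sorted(range(len(demos)), key=lambda i: scores[i], reverse=True)
```

Curation would then keep the top-ranked demos; the point the thread makes is that the ranking depends on each demo's effect on policy behavior, not on surface quality.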
📢 Our #RSS2025 workshop on OOD generalization in robotics is happening live now! 📍 EEB 132. Join us for a superb lineup of invited speakers and panelists: @lschmidt3 @DorsaSadigh @andrea_bajcsy @HarryXu12 @MashaItkina @Majumdar_Ani @KarlPertsch
📢 Excited for the second workshop on Out-of-Distribution Generalization in Robotics: Towards Reliable Learning-based Autonomy at RSS! #RSS2025 🎯 How can we build reliable robotic autonomy for the real world?
Short papers due 05/25/25 🔗 https://t.co/wv8vqOEGk3 🧵 (1/4)
0
2
14
Interested in deploying real robots in open-world, outdoor environments? Come to our presentation this Tuesday at 9:30 AM, poster #12 @USC, to learn how we master outdoor navigation with internet-scale data and human-in-the-loop feedback! #RSS2025 @RoboticsSciSys
🗺️ Scalable mapless navigation demands open-world generalization. Meet CREStE: our SOTA navigation model that nails path planning in novel scenes with just 3 hours of data, navigating 2 km with just one human intervention. Project page 🔗: https://t.co/ZX4g47Pmiv A thread 🧵
0
3
9
How can we build mobile manipulation systems that generalize to novel objects and environments? Come check out MOSART at #RSS2025! Paper: https://t.co/s60i7c5nhp Project webpage: https://t.co/i2wphF9Ehl Code: https://t.co/YeKL8fM8FM
0
11
40
RSS Pioneers poster happening live on the grass @USC!! Come to Associates Park, poster #8, to chat more about continual robot learning, human-in-the-loop learning, and reliable deployment! #RSS2025
Honored to be part of the RSS Pioneers 2025 cohort! Looking forward to meeting everyone @RoboticsSciSys in LA this year!
0
5
68
Workshop on Mobile Manipulation at #RSS2025 happening now!! Join us at the Hughes Aircraft Electrical Engineering Center, Room 132, if you're here in person, or join us on Zoom. Website: https://t.co/RL3eYKTUd6
0
1
17
Most assistive robots live in labs. We want to change that. FEAST enables care recipients to personalize mealtime assistance in the wild, with minimal researcher intervention across diverse in-home scenarios. 🏆 Outstanding Paper & Systems Paper Finalist @RoboticsSciSys 🧵 1/8
5
69
326
Check out our paper and website for more details. A huge thank you to the team: @rutavms @dafeijing Jack Pittenger @kiwi_sherbet @YuchenCui1 @ybisk @RobobertoMM @yukez! @texas_robotics @UCLAComSci @CSDatCMU
0
0
3
🔑 Key insight from the user studies: VLM-based commonsense reasoning is crucial for diverse intent inference in real-world assistive tasks. Casper consistently outperforms the baselines on user workload and user satisfaction, as well as task performance metrics. (7/n)
1
0
4
🙋🏻‍♀️ We conduct extensive user studies on multi-step mobile manipulation tasks. At each step, the robot disambiguates user intent among multiple plausible goals, selecting the correct one based on user inputs and visual context. (6/n)
1
0
3
Casper's key idea #2: Use a parameterized skill library to fulfill intents. Once confirmed by the user, Casper executes the corresponding skill with estimated parameters. (5/n)
1
0
5
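To make the "parameterized skill library" pattern from (5/n) concrete, here is a toy sketch; the skill names, parameter formats, and confirmation flow are invented for illustration and are not Casper's actual interfaces.

```python
# Toy parameterized skill library (invented names, not Casper's API).
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Intent:
    skill: str    # which library skill fulfills the intent, e.g. "pick"
    params: dict  # parameters estimated from perception, e.g. a grasp pose

# Each entry is a parameterized behavior the robot can execute.
SKILLS: Dict[str, Callable[..., None]] = {
    "pick": lambda target_pose: print(f"picking at {target_pose}"),
    "open_door": lambda handle_pose: print(f"opening door at {handle_pose}"),
}

def execute_if_confirmed(intent: Intent, user_confirms: bool) -> None:
    """Run the skill matching a confirmed intent with its estimated params."""
    if user_confirms and intent.skill in SKILLS:
        SKILLS[intent.skill](**intent.params)

execute_if_confirmed(Intent("pick", {"target_pose": (0.4, 0.1, 0.2)}), True)
```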
Casper's key idea #1: Use VLM commonsense reasoning to infer diverse human intents. Casper generates task candidates from observations and infers intent from user inputs among the task candidates, repeating until predictions are self-consistent. (4/n)
1
0
4
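One plausible reading of the self-consistency loop in (4/n) is majority voting over repeated VLM samples. The sketch below implements that reading; `query_vlm`, the sample count, and the agreement threshold are assumptions, not Casper's published procedure.

```python
# Hedged sketch: sample several VLM intent predictions and accept one
# only when a clear majority agrees ("self-consistency").
from collections import Counter

def infer_intent(query_vlm, observation, user_input, candidates,
                 n_samples=5, threshold=0.6):
    """Return the majority intent, or None if agreement is too weak."""
    votes = Counter(query_vlm(observation, user_input, candidates)
                    for _ in range(n_samples))
    top, count = votes.most_common(1)[0]
    return top if count / n_samples >= threshold else None

# Toy usage with a fake VLM that always answers the same candidate.
print(infer_intent(lambda obs, inp, cands: "hand me the cup",
                   observation=None, user_input="reach right",
                   candidates=["hand me the cup", "open the fridge"]))
```

Returning None when agreement is weak matches the thread's framing: the robot only steps in when it is confident.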
Given user teleoperation input, Casper predicts user intent in real time. Upon user confirmation, it fulfills the intent with autonomous execution. Casper's background reasoning runs in parallel with foreground human control to minimize user disruption. (3/n)
1
0
6
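The background/foreground split in (3/n) is a classic producer-consumer pattern. Below is a small, self-contained concurrency sketch under invented interfaces (`get_observation`, `infer_intent`, the loop rates): inference runs in a daemon thread and only ever offers an intent, so the fast teleop loop never blocks on it.

```python
# Toy concurrency sketch: background intent inference alongside a
# foreground teleoperation loop. All interfaces and rates are invented.
import queue
import threading
import time

intent_box = queue.Queue(maxsize=1)  # holds at most the newest intent guess

def background_reasoner(get_observation, infer_intent, stop):
    """Infer intent continuously without blocking the control loop."""
    while not stop.is_set():
        guess = infer_intent(get_observation())
        if guess is not None:
            if intent_box.full():
                intent_box.get_nowait()  # drop the stale guess
            intent_box.put(guess)
        time.sleep(0.1)  # reasoning ticks at ~10 Hz

def teleop_loop(read_user_command, send_robot_command, steps=10):
    """Foreground loop: the human stays in control at the full rate."""
    for _ in range(steps):
        send_robot_command(read_user_command())
        if not intent_box.empty():  # surface a proposal, never block
            print("proposed intent:", intent_box.get_nowait())
        time.sleep(0.02)  # control ticks at ~50 Hz

stop = threading.Event()
threading.Thread(
    target=background_reasoner,
    args=(lambda: "camera frame", lambda obs: "hand me the cup", stop),
    daemon=True,
).start()
teleop_loop(lambda: "joystick delta", lambda cmd: None)
stop.set()
```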
📄 paper: https://t.co/4frz5YaB7G 🌐 website: https://t.co/cJDUr1Q6uv (2/n)
1
0
7