Huihan Liu

@huihan_liu

Followers 3K · Following 889 · Media 29 · Statuses 257

PhD @UTAustin | πŸ‘©πŸ»-in-the-Loop Learning for πŸ€– | prev @AIatMeta @MSFTResearch @berkeley_ai | undergrad @UCBerkeley 🐻

Austin, TX
Joined November 2020
@huihan_liu
Huihan Liu
4 months
Meet Casper πŸ‘», a friendly robot sidekick who shadows your day, decodes your intents on the fly, and lends a hand while you stay in control! Instead of passively receiving commands, what if a robot actively senses what you need in the background and steps in when confident? (1/n)
6
40
159
@huihan_liu
Huihan Liu
9 days
Congratulations @mangahomanga!! Incredible opportunity for aspiring students interested in robot learning, looking forward to your amazing work!!
@mangahomanga
Homanga Bharadhwaj
9 days
I'll be joining the faculty @JohnsHopkins late next year as a tenure-track assistant professor in @JHUCompSci. Looking for PhD students to join me in tackling fun problems in robot manipulation, learning from human data, understanding+predicting physical interactions, and beyond!
1
2
18
@huihan_liu
Huihan Liu
1 month
Check out MimicDroid - helping humanoid robots adapt to new tasks from human videos with in-context learning!
@rutavms
Rutav
1 month
Intelligent humanoids should have the ability to quickly adapt to new tasks by observing humans.
Why is such adaptability important?
🌍 Real-world diversity is hard to fully capture in advance
🧠 Adaptability is central to natural intelligence
We present MimicDroid πŸ‘‡ 🌐
0
3
24
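For intuition, here is a rough sketch of what in-context adaptation from human videos can look like: a frozen transformer policy that takes encoded human-video clips as prompt tokens next to the robot's current observations, so a new task needs no weight updates. All names, shapes, and the architecture below are illustrative assumptions, not MimicDroid's actual design.

```python
# Hypothetical in-context policy: human-video demonstrations become
# context tokens, the current observation becomes query tokens, and a
# frozen transformer maps both to an action. Illustrative only.
import torch
import torch.nn as nn

class InContextPolicy(nn.Module):
    def __init__(self, token_dim=256, action_dim=7, n_layers=4, n_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(token_dim, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.action_head = nn.Linear(token_dim, action_dim)

    def forward(self, context_tokens, obs_tokens):
        # context_tokens: (B, Tc, D) encoded human-video clips
        # obs_tokens:     (B, To, D) encoded robot observations
        tokens = torch.cat([context_tokens, obs_tokens], dim=1)
        features = self.backbone(tokens)
        # Predict the next action from the last observation token.
        return self.action_head(features[:, -1])

# At test time, condition on a few clips of the *new* task; the
# weights stay frozen, so adaptation is purely in-context.
policy = InContextPolicy().eval()
human_ctx = torch.randn(1, 32, 256)    # stand-in for encoded human video
robot_obs = torch.randn(1, 8, 256)     # stand-in for encoded observations
action = policy(human_ctx, robot_obs)  # (1, 7) action command
```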
@huihan_liu
Huihan Liu
3 months
Excited that Casper πŸ‘» has been accepted to CoRL 2025! #CoRL2025 A big thank you to all the collaborators :)
@huihan_liu
Huihan Liu
4 months
Meet Casper πŸ‘», a friendly robot sidekick who shadows your day, decodes your intents on the fly, and lends a hand while you stay in control! Instead of passively receiving commands, what if a robot actively senses what you need in the background and steps in when confident? (1/n)
3
2
81
@siqi_shang
Siqi Shang
4 months
3D print tactile sensors anywhere inside your fin-ray fingers! We present FORTE - a solution to sensorize compliant fingers from inside with high-resolution force and slip sensing.
🌐 https://t.co/S9WSnE5YJG
With precise and responsive tactile feedback, FORTE can gently handle…
3
36
153
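Slip sensing in general (not FORTE's actual method, which the thread above doesn't detail) often comes down to spotting high-frequency energy in a force signal. A generic, hedged sketch:

```python
# Generic slip-detection heuristic for illustration only: slips show
# up as high-frequency vibration on top of the slow grasp force, so
# detrend a short window and threshold the residual variance.
import numpy as np

def detect_slip(force_window: np.ndarray, threshold: float = 0.05) -> bool:
    """force_window: recent samples from one force channel."""
    t = np.arange(len(force_window))
    trend = np.polyval(np.polyfit(t, force_window, deg=1), t)  # slow trend
    residual = force_window - trend
    return float(np.var(residual)) > threshold  # vibration energy spike

steady = np.random.normal(5.0, 0.01, 100)                  # stable grasp
slipping = steady + 0.5 * np.sin(np.linspace(0, 60, 100))  # added vibration
print(detect_slip(steady), detect_slip(slipping))          # False True
```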
@agiachris
Christopher Agia
4 months
What makes data β€œgood” for robot learning? We argue: it’s the data that drives closed-loop policy success! Introducing CUPID πŸ’˜, a method that curates demonstrations not by "quality" or appearance, but by how they influence policy behavior, using influence functions. (1/6)
6
24
144
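As a loose illustration of curating by influence on policy behavior, the sketch below scores each demonstration by how well its behavior-cloning gradient aligns with the gradient of a closed-loop success objective. This is a TracIn-style first-order proxy, not CUPID's actual influence-function estimator; `bc_loss` and `success_loss` are hypothetical callables supplied by the user.

```python
# Rank demos by gradient alignment with a success objective and keep
# the top fraction. First-order proxy for influence, for illustration.
import torch

def flat_grad(loss, params):
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def curate(policy, demos, success_loss, bc_loss, keep_frac=0.5):
    params = [p for p in policy.parameters() if p.requires_grad]
    g_success = flat_grad(success_loss(policy), params)
    scores = []
    for demo in demos:
        g_demo = flat_grad(bc_loss(policy, demo), params)
        # Positive alignment: training on this demo moves the policy
        # in a direction that also improves closed-loop success.
        scores.append(torch.dot(g_demo, g_success).item())
    ranked = sorted(range(len(demos)), key=lambda i: scores[i], reverse=True)
    return [demos[i] for i in ranked[: int(len(demos) * keep_frac)]]
```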
@huihan_liu
Huihan Liu
4 months
πŸ“’ Our #RSS2025 workshop on OOD generalization in robotics is happening live now! πŸ“ EEB 132
Join us with a superb lineup of invited speakers and panelists: @lschmidt3 @DorsaSadigh @andrea_bajcsy @HarryXu12 @MashaItkina @Majumdar_Ani @KarlPertsch
@RohanSinhaSU
Rohan Sinha
6 months
πŸ“’ Excited for the second workshop on Out-of-Distribution Generalization in Robotics: Towards Reliable Learning-based Autonomy at RSS! #RSS2025
🎯 How can we build reliable robotic autonomy for the real world?
πŸ“… Short papers due 05/25/25
🌐 https://t.co/wv8vqOEGk3
🧡(1/4)
0
2
14
@ArthurKZhang
Arthur King Zhang
4 months
Interested in deploying real robots in open-world, outdoor environments? Come to our presentation this Tuesday at 9:30 AM, poster #12 @USC to learn how we master outdoor navigation with internet-scale data and human-in-the-loop feedback! #RSS2025 @RoboticsSciSys
@ArthurKZhang
Arthur King Zhang
5 months
πŸ—ΊοΈ Scalable mapless navigation demands open-world generalization. Meet CREStE: our SOTA navigation model that nails path planning in novel scenes with just 3 hours of data, navigating 2 Km with just 1 human intervention. Project Page 🌐: https://t.co/ZX4g47Pmiv A thread 🧡
0
3
9
@DoubleHan07
Han Zhang
4 months
Excited to present DOGlove at #RSS2025 today! We’ve brought the glove with us; come by and try it out!
πŸ“Œ Poster: All day at #54 (Associates Park)
🎀 Spotlight talk: 2:00–3:00pm (Bovard Auditorium)
0
4
41
@arjun__gupta
Arjun Gupta
4 months
How can we build mobile manipulation systems that generalize to novel objects and environments? Come check out MOSART at #RSS2025!
Paper: https://t.co/s60i7c5nhp
Project webpage: https://t.co/i2wphF9Ehl
Code: https://t.co/YeKL8fM8FM
0
11
40
@huihan_liu
Huihan Liu
4 months
RSS Pioneers poster happening live on the grass @USC!! πŸ˜›πŸ˜› Come to Associates Park, poster #8 to chat more about continual robot learning, human-in-the-loop learning, and reliable deployment! #RSS2025
@huihan_liu
Huihan Liu
6 months
Honored to be part of the RSS Pioneers 2025 cohort! Looking forward to meeting everyone @RoboticsSciSys in LA this year!
0
5
68
@huihan_liu
Huihan Liu
4 months
Workshop on Mobile Manipulation at #RSS2025 happening now!! Join us at Hughes Aircraft Electrical Engineering Center, Room 132 if you’re here in person, or join us on Zoom. Website: https://t.co/RL3eYKTUd6
0
1
17
@rkjenamani
Rajat Kumar Jenamani
4 months
Most assistive robots live in labs. We want to change that. FEAST enables care recipients to personalize mealtime assistance in-the-wild, with minimal researcher intervention across diverse in-home scenarios. πŸ† Outstanding Paper & Systems Paper Finalist @RoboticsSciSys 🧡1/8
5
69
326
@huihan_liu
Huihan Liu
4 months
Check out our paper and website for more details. A huge thank you to the team @rutavms @dafeijing Jack Pittenger @kiwi_sherbet @YuchenCui1 @ybisk @RobobertoMM @yukez! @texas_robotics @UCLAComSci @CSDatCMU
0
0
3
@huihan_liu
Huihan Liu
4 months
πŸ”‘Key insight from the user studies: VLM-based commonsense reasoning is crucial for diverse intent inference in real-world assistive tasks. Casper consistently outperforms the baselines on user workload and user satisfaction, as well as task performance metrics. (7/n)
1
0
4
@huihan_liu
Huihan Liu
4 months
πŸ™‹πŸ»β€β™€οΈWe conduct extensive user studies on multi-step mobile manipulation tasks. At each step, the robot disambiguates user intent among multiple plausible goals, selecting the correct one based on user inputs and visual context. (6/n)
1
0
3
@huihan_liu
Huihan Liu
4 months
Casper's key idea #2: Use a parameterized skill library to fulfill intents. Once confirmed by the user, Casper executes the corresponding skill with estimated parameters. (5/n)
1
0
5
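A minimal sketch of what a parameterized skill library with a confirmation gate could look like; the skill names and the `SkillCall` structure are illustrative assumptions, not Casper's actual interface.

```python
# Map a confirmed intent to a registered skill and execute it with
# parameters estimated from perception. Illustrative sketch only.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class SkillCall:
    name: str      # which skill the inferred intent maps to
    params: dict   # estimated parameters, e.g. an object and grasp pose

class SkillLibrary:
    def __init__(self):
        self._skills: Dict[str, Callable[..., None]] = {}

    def register(self, name: str, fn: Callable[..., None]) -> None:
        self._skills[name] = fn

    def execute(self, call: SkillCall, user_confirmed: bool) -> None:
        # Act autonomously only after explicit user confirmation,
        # keeping the human in control.
        if user_confirmed:
            self._skills[call.name](**call.params)

library = SkillLibrary()
library.register("pick", lambda obj, grasp_pose: print(f"picking {obj}"))
library.execute(SkillCall("pick", {"obj": "mug", "grasp_pose": None}),
                user_confirmed=True)
```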
@huihan_liu
Huihan Liu
4 months
Casper's key idea #1: Use VLM commonsense reasoning to infer diverse human intents. Casper generates task candidates from observations and infers intent from user inputs among the task candidates, repeating until predictions are self-consistent. (4/n)
1
0
4
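A hedged sketch of that candidate-generation and self-consistency loop; `vlm.ask` is a stand-in for any chat-style vision-language model client, not Casper's actual API.

```python
# Generate task candidates from the observation, infer the intended
# task from the user's input, and repeat until the answer stabilizes.
def infer_intent(vlm, image, user_input, max_rounds=5):
    prediction = None
    for _ in range(max_rounds):
        candidates = vlm.ask(
            image=image,
            prompt="List plausible tasks the user may want next.")
        new_prediction = vlm.ask(
            image=image,
            prompt=(f"Given the user's input '{user_input}', which of "
                    f"these tasks is intended? {candidates}"))
        if new_prediction == prediction:
            return prediction  # self-consistent: same answer twice in a row
        prediction = new_prediction
    return None  # no consistent intent: stay passive rather than act
```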
@huihan_liu
Huihan Liu
4 months
Given user teleoperation input, Casper predicts user intent in real time. Upon user confirmation, it fulfills the intent with autonomous execution. Casper's background reasoning runs in parallel with foreground human control to minimize user disruption. (3/n)
1
0
6
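To make the foreground/background split concrete, here is a minimal sketch assuming a fast teleoperation loop and a slower reasoning thread that proposes intents without ever blocking the human's commands; every function name here is a placeholder.

```python
# Foreground: human teleoperation at control rate. Background: slow
# intent inference that posts proposals to a queue. Illustrative only.
import queue
import threading
import time

proposals: "queue.Queue[str]" = queue.Queue()

def background_reasoner(get_observation):
    while True:
        obs = get_observation()
        proposals.put(f"intent inferred from {obs}")  # placeholder for a VLM call
        time.sleep(1.0)  # reasoning is far slower than control

def teleop_loop(read_human_command, send_robot_command, steps=5):
    for _ in range(steps):
        send_robot_command(read_human_command())  # human stays in control
        try:
            intent = proposals.get_nowait()
            print(f"proposal awaiting confirmation: {intent}")
        except queue.Empty:
            pass  # never block the foreground on reasoning
        time.sleep(0.01)  # ~100 Hz control loop

threading.Thread(target=background_reasoner,
                 args=(lambda: "camera frame",), daemon=True).start()
teleop_loop(lambda: "teleop twist", lambda cmd: None)
```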
@huihan_liu
Huihan Liu
4 months
πŸ“„paper: https://t.co/4frz5YaB7G 🌐website: https://t.co/cJDUr1Q6uv (2/n)
1
0
7