
Katherine Liu
@robo_kat
Followers: 114
Following: 388
Media: 0
Statuses: 95
Senior Research Scientist @ToyotaResearch, previously Robotics PhD @MIT_CSAIL. Excited about machine learning for embodied intelligence. Opinions my own!
Joined April 2018
I love robot videos, and there are some quite interesting ones to check out on our project. Excited to share the results of the team's work, and glad to have been a core contributor. Go check out the paper because there are a lot of interesting details!
TRI's latest Large Behavior Model (LBM) paper landed on arxiv last night! Check out our project website: https://t.co/n0qmDRivRH One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the
0
0
7
Probably my favorite plot from the paper, which sums it all up, is this one. The plot compares performance using different amounts of pretraining data before training a new task: 0% (aka single task), 25%, 50%, or 100% of TRI's data, then 100% of TRI's data + all of the
2
3
36
Thrilled to share what we've been building at TRI over the past several months: our first Large Behavior Models (LBMs) are here! I'm proud to have been a core contributor to the multi-task policy learning and post-training efforts. At TRI, we've been researching how LBMs can
3
29
185
Have been waiting for this release! Robotics needs rigorous and careful evaluation now more than ever!
1
5
65
At @ToyotaResearch, we've been studying how LBMs can help robots learn faster and better. We built a rigorous evaluation pipeline to benchmark LBM performance with statistical confidence. Results suggest that pre-training on hundreds of tasks yields 80% data savings on new tasks.
1
1
24
Awesome paper on robot foundation models with super rigorous evaluation. Definitely a must-read!
0
1
42
Can we learn a 3D world model that predicts object dynamics directly from videos? Introducing Particle-Grid Neural Dynamics: a learning-based simulator for deformable objects that trains from real-world videos. Website: https://t.co/1PWPdVTFAk ArXiv: https://t.co/oSIBKtUTbk
4
34
168
**Steerability** remains one of the key issues for current vision-language-action models (VLAs). Natural language is often ambiguous and vague: "Hang a mug on a branch" vs "Hang the left mug on the right branch." Many works claim to handle language input, yet the tasks are often
Do VLA models really listen to language instructions? Maybe not. Introducing our RSS paper: CodeDiffuser -- using VLM-generated code to bridge the gap between **high-level language** and **low-level visuomotor policy**. Try the live demo: https://t.co/sLlTIyFu19 (1/9)
0
24
132
How can we achieve both common sense understanding that can deal with varying levels of ambiguity in language and dexterous manipulation? Check out CodeDiffuser, a really neat work that bridges Code Gen with a 3D Diffusion Policy! This was a fun project with cool experiments!
1
7
12
We recently launched https://t.co/mshIJSnIYu as a community-driven effort to pool UMI-related data together. If you are using a UMI-like system, please consider adding your data here. No dataset is too small; small data WILL add up!
4
42
248
I'm super excited to start a great new collaboration with the fantastic team at Boston Dynamics. Scott Kuindersma and I chatted with Evan Ackerman about it earlier today. https://t.co/LjyldS6HBS
0
0
1
With grad school admissions being open now, I'd like to re-share our list with all the awesome faculty at MIT that work in #AI and #ClimateChange
https://t.co/QW0JNpjKhk
@ClimateChangeAI @MIT @eapsMIT @MITCSConsortium @priyald17 @sarameghanbeery
github.com
A curated list of MIT faculty that tackle climate change with machine learning, for applying students, undergraduates, or others (blutjens/awesome-MIT-ai-for-climate-change)
0
3
16
Check out our #ECCV2024 paper on "Zero-Shot Multi-Object Scene Completion"! Drop by poster 313 this morning if you're interested! Website: https://t.co/iMYwLdqVpi Code&Dataset: https://t.co/d6hjNrYYIL
0
5
25
Read this thread! Then go read the paper.
Evaluation in robot learning papers, or, please stop using only success rate. A paper and a thread: https://t.co/3cUad03GUl
0
0
3
Speakers Announced! The #WiCV workshop at @eccvconf is thrilled to announce our amazing speakers: @dimadamen, @ftm_guney, Stella Yu, Hedvig Kjellström, and dinner speaker @HildeKuehne! Join us on Monday 2pm-6pm at #ECCV2024! Full program here: https://t.co/AWTk18jXqf
0
13
69
We are grateful to be awarded an oral presentation -- please come by Wed 10/2 at 1:30pm (I believe we are the first talk in the oral session) as well as the poster session afterward (number 156) at 4:30pm! #ECCV2024
Excited to share our new paper on large-angle monocular dynamic novel view synthesis! Given a single RGB video, we propose a method that can imagine what that scene would look like from any other viewpoint. Website: https://t.co/uhY9NdWAPt Paper: https://t.co/beb3W8ojOr (1/5)
3
3
27
My group will be seeking new PhD students in the coming cycle! The best way to reach us is to apply to the @MITEECS PhD program. I look for independent, creative, interactive, supportive, passionate, bright students who want to work on fundamental problems with a geometry flavor.
4
38
247
What is a really really hard problem to work on in #AI? My own answer is Spatial Intelligence - a technology that could empower and enable countless possible use cases in creation, design, learning, AR/VR, robotics, and beyond. It's a real honor that my cofounders @jcjohnss
Hello, world! We are World Labs, a spatial intelligence company building Large World Models (LWMs) to perceive, generate, and interact with the 3D world. Read more: https://t.co/El9rgi6bxQ
124
384
2K
Do you actually know how well your policy works? Excited to have folks like @imp_aa and co. pushing on this front.
Check out our open-source STATS package https://t.co/alpkMQtJER if you are a roboticist tasked with quantifying policy performance with success/failure labels, and are wondering how to get the tightest confidence interval estimates out of a small set of policy rollouts.
0
0
1
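The STATS package's own API isn't shown in the tweet, so as a minimal illustration of the underlying idea (binomial confidence intervals from a small set of success/failure rollouts), here is a sketch of the standard Wilson score interval; the function name and the example numbers are hypothetical, not taken from the package.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Two-sided Wilson score interval for a binomial success rate.

    A common choice when n (the number of policy rollouts) is small,
    where the naive normal-approximation interval is unreliable.
    z = 1.96 corresponds to ~95% confidence.
    """
    if n <= 0:
        raise ValueError("need at least one rollout")
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return max(0.0, center - half), min(1.0, center + half)

# Hypothetical example: 7 successes out of 10 rollouts.
lo, hi = wilson_interval(7, 10)  # roughly (0.40, 0.89)
```

Note how wide the interval is at n=10: this is exactly why rigorous evaluation with enough rollouts (and tight interval estimators) matters when comparing policies.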
Introducing DPPO, Diffusion Policy Policy Optimization. DPPO optimizes a pre-trained Diffusion Policy using policy gradients from RL, showing surprising improvements over a variety of baselines across benchmarks and sim2real transfer https://t.co/uOkkcnVFCf
5
93
476