
Danny Driess
@DannyDriess
Followers 4K · Following 198 · Media 39 · Statuses 147
Research Scientist @physical_int. Formerly Google DeepMind
Joined August 2021
How to build vision-language-action models that train fast, run fast & generalize? In our new paper, we formalize & analyze the approach of our π-0.5 model & further improve it with a single stage recipe. Blog: Paper:
RT @RussTedrake: TRI's latest Large Behavior Model (LBM) paper landed on arxiv last night! Check out our project website:
Had a blast on the Unsupervised Learning Podcast with @hausman_k! We covered the past, present, and future of robot learning 🤖. Big thanks to @jacobeffron for being a fantastic host!
New Unsupervised Learning with @hausman_k & @DannyDriess (@physical_int) on building generalist robotics foundation models and:
- What’s next in AI x robotics
- Biggest outstanding questions
- How they 10x’d model training speed
- Open sourcing π0
- Breakthroughs
It was a really fun project with the amazing team @physical_int including @brian_ichter, Jost Tobias Springenberg, @liliyu_lili, Adrian Li-Bell, @KarlPertsch, @allenzren, @HomerWalke, @QuanVng, @lucy_x_shi, @slevine.
Check out our new work where we dissect various aspects of chain-of-thought (at both training and inference time) for robotics! Awesome work led by @verityw_.
Embodied chain-of-thought reasoning (ECoT) is a powerful way to improve robot generalization & performance. But why is this the case, and how can that inform the design of learned robot policies? We investigate these questions in our latest work! 1/6
We auto-encode point tracks to automatically evaluate motion realism in generative video models. By focusing inherently on motion, our new metric (TRAJAN) correlates much better with human judgments of these models than appearance-based metrics.
Humans can tell the difference between a realistic generated video and an unrealistic one – can models? Excited to share TRAJAN: the world’s first point TRAJectory AutoeNcoder for evaluating motion realism in generated and corrupted videos. 🌐 🧵
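The core idea described above — encode point trajectories with an autoencoder and score motion realism by how well they reconstruct — can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the function names, the 1-D tracks, and the use of a PCA projection in place of a learned encoder are all assumptions for the sake of a small, runnable example.

```python
import numpy as np

def fit_track_codec(tracks: np.ndarray, n_components: int = 4):
    """Fit a linear 'autoencoder' on realistic tracks.

    tracks: (N, T) array of 1-D point trajectories.
    Returns (mean, basis), where basis holds the top principal
    components of realistic motion (a stand-in for TRAJAN's
    learned latent space).
    """
    mean = tracks.mean(axis=0)
    _, _, vt = np.linalg.svd(tracks - mean, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(track: np.ndarray, mean, basis) -> float:
    """Score one track: low error means the motion resembles the
    realistic training tracks; high error flags unrealistic motion."""
    centered = track - mean
    recon = centered @ basis.T @ basis  # project into latent space and back
    return float(np.mean((centered - recon) ** 2))

# "Realistic" motion stand-in: smooth sinusoidal tracks at random phases.
t = np.linspace(0, 2 * np.pi, 32)
rng = np.random.default_rng(0)
train = np.stack([np.sin(t + p) for p in rng.uniform(0, 2 * np.pi, 64)])
mean, basis = fit_track_codec(train)

smooth = np.sin(t + 0.3)                         # plausible motion
jittery = smooth + rng.normal(0, 0.5, t.shape)   # corrupted motion
# The motion-focused score separates the two, regardless of appearance.
assert reconstruction_error(smooth, mean, basis) < reconstruction_error(jittery, mean, basis)
```

The design point this illustrates: because the score is computed purely on trajectories, appearance artifacts do not affect it, which is what lets a motion-based metric track human judgments of realism more closely than appearance-based ones.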
Scaling data diversity, transfer between data sources, and a good training recipe were the main ingredients that allow robots to generalize to new homes!
We got a robot to clean up homes that were never seen in its training data! Our new model, π-0.5, aims to tackle open-world generalization. We took our robot into homes that were not in the training data and asked it to clean kitchens and bedrooms. More below⤵️
In particular, the diverse in-the-wild robot data helps a lot, even though this data comes from static robots and we evaluate on mobile manipulation tasks. Check out more in the blog post and paper:
pi.website
Our latest generalist policy, π0.5, extends π0 and enables open-world generalization. Our new model can control a mobile manipulator to clean up an entirely new kitchen or bedroom.