Peter Chen Profile

Peter Chen (@peterxichen)
Followers: 2,813 · Following: 1,178 · Media: 5 · Statuses: 124

Covariant CEO and Co-Founder. Previously @OpenAI, @UCBerkeley PhD.

Joined December 2017
@peterxichen
Peter Chen
3 months
At Covariant, we have been deploying robots to the real world and thinking hard about how to build true AI for general-purpose robots that can go beyond demos. For a long time, we have been building up large robotics datasets through our deployments without the right model that can
9
11
145
@peterxichen
Peter Chen
6 years
Interesting paper from DeepMind: near-SOTA density modeling performance on MNIST with only a **single pass** through the training data and online gradient descent
2
21
105
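A minimal sketch of that single-pass, online-gradient-descent setup, in Python/PyTorch. The one-layer masked autoregressive model and the binarized-MNIST pipeline below are illustrative assumptions, not the paper's architecture; the point is only that each batch is seen exactly once and its NLL is recorded before the gradient step that consumes it.

```python
# Sketch only (assumed model/pipeline): one pass over MNIST with online SGD,
# logging each batch's NLL before the update that uses it.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets, transforms

class MaskedLinearAR(nn.Module):
    """Minimal autoregressive density model over 784 binarized pixels:
    the logit for pixel i depends only on pixels j < i (MADE-style mask)."""
    def __init__(self, dim=784):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        self.register_buffer("mask", torch.tril(torch.ones(dim, dim), diagonal=-1))

    def nll(self, x):  # x: (B, 784) in {0, 1}; returns per-image NLL in nats
        logits = F.linear(x, self.linear.weight * self.mask, self.linear.bias)
        return F.binary_cross_entropy_with_logits(logits, x, reduction="none").sum(dim=1)

loader = torch.utils.data.DataLoader(
    datasets.MNIST(".", train=True, download=True, transform=transforms.ToTensor()),
    batch_size=64, shuffle=True)

model = MaskedLinearAR()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

nll_log = []
for images, _ in loader:                       # exactly one epoch = a single pass
    x = (images.view(images.size(0), -1) > 0.5).float()
    nll = model.nll(x).mean()                  # measured before this batch updates the model
    nll_log.append(nll.item())
    opt.zero_grad()
    nll.backward()
    opt.step()                                 # online gradient step; the batch is never revisited
```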
@peterxichen
Peter Chen
4 years
Excited to announce our Series B and join forces with @mavolpi @IndexVentures and @JordanJacobs10 @radicalvcfund to push the boundary of AI Robotics in the real world!!
@CovariantAI
Covariant
4 years
We’ve raised $40m in Series B funding led by @IndexVentures w/ AI-focused @Radicalvcfund + existing investor @AmplifyPartners. Grateful for the support of our investors, customers + partners as we continue to bring AI Robotics to the real world!
5
28
251
6
6
82
@peterxichen
Peter Chen
2 months
What separates lab robot demos from robots in production? Extremely high reliability. This requires our model to robustly handle many long-tail scenarios like the one in the attached picture, where one stray barcode label can tank the 99.95% sortation accuracy requirement. (1/n)
[attached image]
@pabbeel
Pieter Abbeel
2 months
Resonates with my own experience: robot demo --> INSANE PAIN --> robot creating value in production. Luckily, I am hopeful we have by now paid most of our dues at Covariant :)
2
13
203
1
9
62
@peterxichen
Peter Chen
2 months
Text foundation models (LLMs) have an incredible ability to adapt to new problems through in-context learning. We show that it’s possible for robots to learn in context as well, in our latest scaling update of RFM-1, Covariant’s robotics foundation model. (1/n)
@CovariantAI
Covariant
2 months
RFM-1's latest scaling update enables robots to improve their grasping through in-context learning. The video shows the self-reflective reasoning capability — after a few failed tries, the robot has an internal dialogue, hypothesizing that its current gripper is not suited for
4
24
125
1
8
58
@peterxichen
Peter Chen
2 months
What makes the training data for RFM-1 unique? A few properties distinguish it from typical lab data: 1. Real-world complexity: picking from extremely cluttered scenes, where item occlusion presents a challenge for reliability. 2. High-speed handling: the dynamics of
@CovariantAI
Covariant
3 months
This is not a one-off cycle. This is performance over repeated cycles – the new benchmark for reliable AI Robotic systems. High pick rates, navigating cluttered environments, no double-picks, scoops, or errors – just consistent flawless execution, parcel after parcel.
4
13
97
0
4
34
@peterxichen
Peter Chen
1 month
It was great to show @geoffreyhinton foundation models meeting robotics at Covariant HQ!
@CovariantAI
Covariant
1 month
Many of the groundbreaking advancements we've witnessed over the past decade, spanning computer vision, speech recognition, protein folding prediction, and beyond, hinge on the deep learning work conducted by @geoffreyhinton, who has fundamentally changed the focus and
[four images attached]
0
6
41
1
3
33
@peterxichen
Peter Chen
3 months
Congrats @chelseabfinn @hausman_k @svlevine on starting a new company. We need more people working on solving the physical-world data challenge and bringing foundation models to robotics!
@chelseabfinn
Chelsea Finn
3 months
I’m really excited to be starting a new adventure with multiple amazing friends & colleagues. Our company is called Physical Intelligence (Pi or π, like the policy). A short thread 🧵
54
110
2K
0
0
25
@peterxichen
Peter Chen
4 years
Had a lot of fun diving deep into how AI Robots learn with @CadeMetz & @satariano
@CadeMetz
Cade Metz
4 years
A robot in Germany shows that machines can learn to do the job of a human (*learn* being the key word): (with the great @satariano )
1
21
68
0
4
24
@peterxichen
Peter Chen
2 months
It turned out that going from a 90% success rate to 99.95% required significantly more data to cover diverse failure modes. This is why a foundation model approach to robotics, instead of building task- or embodiment-specific policies, is so powerful: one single model can leverage
0
1
17
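For scale, assuming these are per-pick success rates: 90% success means roughly 1 failure every 10 picks, while 99.95% means 1 failure every 2,000 picks, so the long tail has to absorb about a 200x reduction in failure rate.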
@peterxichen
Peter Chen
4 years
Generative models can drastically accelerate database systems! A new learning task was introduced: range density estimation. For more details, see the thread 👇
@zongheng_yang
Zongheng Yang
4 years
Can self-supervised learning help computer systems? Our #ICML2020 paper equips autoregressive models to optimize databases. We introduce a new task, range density: estimate the prob. of variables in ranges. A super simple trick gives 10-100x gains. 👇 1/
1
28
151
1
0
15
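A toy sketch of the range density idea from the quoted thread, in plain Python. The two hand-written columns and the exact enumeration are stand-in assumptions (a real system would use a learned autoregressive model over table columns and approximate large ranges); the point is that the factorization p(x1) * p(x2 | x1) lets the probability mass of a range predicate be accumulated column by column.

```python
# Toy range density estimation with an autoregressive factorization.
# The hand-written tables stand in for a learned model's conditionals.

# p(x1): marginal of the first column (e.g. a discretized "price" bucket).
p_x1 = {0: 0.5, 1: 0.3, 2: 0.2}
# p(x2 | x1): conditional of the second column given the first.
p_x2_given_x1 = {
    0: {0: 0.7, 1: 0.3},
    1: {0: 0.4, 1: 0.6},
    2: {0: 0.1, 1: 0.9},
}

def range_density(r1, r2):
    """P(x1 in r1, x2 in r2) = sum over x1 in r1 of p(x1) * P(x2 in r2 | x1)."""
    total = 0.0
    for v1 in r1:
        total += p_x1[v1] * sum(p_x2_given_x1[v1][v2] for v2 in r2)
    return total

# Selectivity of the predicate "x1 <= 1 AND x2 == 1":
print(range_density(r1={0, 1}, r2={1}))  # 0.5*0.3 + 0.3*0.6 = 0.33
```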
@peterxichen
Peter Chen
3 months
With RFM-1 and its multimodal setup, we have the ability to learn from a large amount of robot interaction with the world: learning robust manipulation policies by looking at robot actions and outcomes across millions of distinct items, learning an intuitive physical world model by
1
1
14
@peterxichen
Peter Chen
4 years
Thrilled to be working together!
@mavolpi
Mike Volpi
4 years
Robotics has been a challenging field for years; AI has changed the game, and the time is now. Today, we are announcing our investment in @CovariantAI. Excited for our journey with @pabbeel and Peter Chen:
3
25
210
0
0
14
@peterxichen
Peter Chen
3 months
Thanks @alexgkendall -- we also love the work that @wayve_ai is doing to bring foundation models to autonomous driving. It's going to be an exciting year for robotics!
@alexgkendall
Alex Kendall
3 months
Exciting result showing how robust, accessible and trustworthy robotics is becoming with AI foundation models. And I'm sure lots more to come.. congratulations @pabbeel @peterxichen 🎉
2
3
50
1
1
13
@peterxichen
Peter Chen
3 months
Advances in open-source base LLMs and the increasing availability of large image-text datasets also mean that RFM-1 can fluently handle text tokens as input and output, which opens up a lot of product possibilities for how people and robots can collaborate. (4/n)
1
1
12
@peterxichen
Peter Chen
2 months
Before we started Covariant, we were very encouraged by how well imitation learning from human demonstrations can work. 30 minutes of human teleop data can train policies with an 80-90% success rate. (2/n)
1
0
10
@peterxichen
Peter Chen
3 months
We have more exciting announcements coming up soon as we deploy RFM-1 to customers and continue to scale up data. In the meantime, take a look at our blog post. If pushing forward robotics foundation models by going through the hard challenges of
1
0
9
@peterxichen
Peter Chen
3 months
What’s even more exciting is that there is shockingly little inductive bias that needs to be manually encoded: RFM-1 is trained with next-token prediction, which gives us confidence that it will scale well with data. See attached for image, action, and video generations that come
1
0
9
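A compact sketch of what training with next-token prediction can look like once images, actions, and video frames are all discrete tokens in one stream, in PyTorch. The shared vocabulary, sequence layout, and tiny causal transformer are illustrative assumptions, not RFM-1's actual tokenizers or architecture; the objective itself is an ordinary causal cross-entropy loss.

```python
# Illustrative sketch (assumed tokenization and model, not RFM-1's):
# a multimodal episode flattened into discrete tokens, trained with a
# standard next-token (causal language modeling) cross-entropy loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 1024   # shared codebook covering image patches, action bins, etc.
D = 256

class TinyCausalLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D)
        layer = nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D, VOCAB)

    def forward(self, tokens):                              # tokens: (B, T) int64
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.blocks(self.embed(tokens), mask=causal)
        return self.head(h)                                 # (B, T, VOCAB) next-token logits

# One "episode" flattened as [image tokens | action tokens | next-frame tokens].
batch = torch.randint(0, VOCAB, (8, 64))
model = TinyCausalLM()
logits = model(batch[:, :-1])                               # predict token t+1 from tokens <= t
loss = F.cross_entropy(logits.reshape(-1, VOCAB), batch[:, 1:].reshape(-1))
loss.backward()
```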
@peterxichen
Peter Chen
2 months
This is an amazing effort to collect more robotics data. I especially love that they have both structured data like multi-view stereo and more modern modalities like text annotations. The key gap in robotics is data, and it's great to see the progress. Congrats @SashaKhazatsky
@SashaKhazatsky
Alexander Khazatsky
2 months
After two years, it is my pleasure to introduce “DROID: A Large-Scale In-the-Wild Robot Manipulation Dataset”. DROID is the most diverse robotic interaction dataset ever released, including 385 hours of data collected across 564 diverse scenes in real-world households and offices
5
77
302
0
0
8
@peterxichen
Peter Chen
3 months
Thanks @EdLudlow for having me - it's great to talk about how AI advances make bringing robots to the real world possible!
@EdLudlow
Ed Ludlow
3 months
Giving Robots the Ability to Reason: today we talked to @CovariantAI on @technology about their foundation model and why tackling the AI behind advanced robotics in isolation is a good strategy. Thanks for coming on the show @peterxichen
2
2
5
0
2
8
@peterxichen
Peter Chen
2 months
We are continuing to see intelligence about how to interact with the world emerge from pre-training a large multimodal model on datasets that are cleaned and structured in exactly the right way. Join us () to build the largest real-world robotics dataset
1
0
5
@peterxichen
Peter Chen
2 months
The multimodal sequence setup of RFM-1 means it can attend to a list of previous episodes of (input image, robot action, sensor readings that indicate the outcome) to come up with an improved image->action policy on the fly. (2/n)
1
0
4
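A minimal sketch of that episode-conditioned setup, in PyTorch. The shapes, the tiny transformer, and names like InContextPolicy are illustrative assumptions rather than RFM-1's real interface; the point is that past (image, action, outcome) triples are packed into one sequence so the model can condition its next grasp on what just failed or succeeded.

```python
# Sketch only (hypothetical shapes and names, not RFM-1's API): pack past
# (image, action, outcome) episodes plus the current image into one sequence
# and let a small transformer propose the next action in context.
import torch
import torch.nn as nn

D = 128  # shared embedding width for all modalities

class InContextPolicy(nn.Module):
    def __init__(self, img_dim=512, act_dim=7):
        super().__init__()
        self.embed_img = nn.Linear(img_dim, D)
        self.embed_act = nn.Linear(act_dim, D)
        self.embed_outcome = nn.Embedding(2, D)      # 0 = grasp failed, 1 = grasp succeeded
        layer = nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.to_action = nn.Linear(D, act_dim)

    def forward(self, past_episodes, current_image):
        # past_episodes: list of (image_feat [img_dim], action [act_dim], outcome in {0, 1})
        tokens = []
        for img, act, outcome in past_episodes:
            tokens += [self.embed_img(img),
                       self.embed_act(act),
                       self.embed_outcome(torch.tensor(outcome))]
        tokens.append(self.embed_img(current_image))  # the scene we must act in now
        seq = torch.stack(tokens).unsqueeze(0)        # (1, seq_len, D)
        h = self.encoder(seq)
        return self.to_action(h[:, -1])               # proposed action read off the last position

# Usage: two failed grasps and one success in context, then ask for a new action.
policy = InContextPolicy()
past = [(torch.randn(512), torch.randn(7), 0),
        (torch.randn(512), torch.randn(7), 0),
        (torch.randn(512), torch.randn(7), 1)]
next_action = policy(past, torch.randn(512))          # shape (1, 7)
```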
@peterxichen
Peter Chen
2 months
One more non-Covariant research mention: this type of ability to adapt a policy in context also has a parallel in humanoid locomotion. See the amazing work by @ir413 casting humanoid locomotion as a next-token prediction problem (similar to RFM-1): We just
0
0
4
@peterxichen
Peter Chen
2 months
Pre-training on millions of sequences of previous robot interactions implicitly teaches RFM-1 rich knowledge of how to adapt in context: if grasping on exposed fabric fails and then grasping on a paper label succeeds, the policy should avoid fabric; if a specific
1
0
5
@peterxichen
Peter Chen
3 months
1
0
2
@peterxichen
Peter Chen
2 months
@tonyzzhao @GoogleDeepMind Love the one continuous take demo! Congrats @tonyzzhao !
0
0
4
@peterxichen
Peter Chen
3 months
@Joe__Black__ We expect RFM-1 to power humanoid robots and different kinds of hands (like those with fingers) as well! We would need to collect more targeted data for those hardware form factors, which will become easier as they become more mature.
0
0
1
@peterxichen
Peter Chen
6 years
@goodfellow_ian Good point! "We made sure that the sets of writers of the training set and test set were disjoint." () So yeah, this makes the evaluation less interpretable
0
0
2
@peterxichen
Peter Chen
2 months
1
0
1
@peterxichen
Peter Chen
6 years
@goodfellow_ian I think it's training on the test set, so it uses more data than other offline methods. But it's not cheating as long as it only takes one pass through the data and doesn't evaluate NLL on any image that it has performed gradient descent on
2
0
2
@peterxichen
Peter Chen
4 years
@vmcheung @josh_tobin_ Congrats @vmcheung @josh_tobin_ - look forward to seeing what you build together!
0
0
2
@peterxichen
Peter Chen
4 years
0
0
1
@peterxichen
Peter Chen
3 years
@danfei_xu Congratulations!
1
0
1
@peterxichen
Peter Chen
4 years
@sdavidmiller Congratulations Stephen!!
0
0
1
@peterxichen
Peter Chen
6 years
@goodfellow_ian Have you done this experiment?
1
0
1