Zipeng Fu

@zipengfu

Followers: 11,772
Following: 1,204
Media: 36
Statuses: 272

Stanford AI & Robotics PhD @StanfordAILab | Creator of Mobile ALOHA, Robot Parkour | Past: Google DeepMind, CMU, UCLA

Palo Alto, CA
Joined February 2014
Pinned Tweet
@zipengfu
Zipeng Fu
5 months
Mobile ALOHA's hardware is very capable. We brought it home yesterday and tried more tasks! It can:
- do laundry👔👖
- self-charge⚡️
- use a vacuum
- water plants🌳
- load and unload a dishwasher
- use a coffee machine☕️
- obtain drinks from the fridge and open a beer🍺
- open
407
2K
7K
@zipengfu
Zipeng Fu
5 months
Introducing 𝐌𝐨𝐛𝐢𝐥𝐞 𝐀𝐋𝐎𝐇𝐀🏄 -- Learning! With 50 demos, our robot can autonomously complete complex mobile manipulation tasks:
- cook and serve shrimp🦐
- call and take an elevator🛗
- store a 3 lbs pot in a two-door cabinet
Open-sourced! Co-led with @tonyzzhao , @chelseabfinn
188
891
4K
@zipengfu
Zipeng Fu
8 months
Introducing our #CoRL2023 (Oral) project: "Robot Parkour Learning" Using vision, our robots can climb over high obstacles, leap over large gaps, crawl beneath low barriers, squeeze through thin slits, and run. All done by one neural network running onboard. And it's open-source!
24
229
1K
@zipengfu
Zipeng Fu
5 months
Our robot can handle these tasks consistently:
- succeeds 9 times in a row on Wipe Wine
- succeeds 5 times in a row on Call Elevator
- is robust against distractors on Use Cabinet
- extrapolates to chairs unseen during training
9
45
387
@zipengfu
Zipeng Fu
5 months
Mobile ALOHA 🏄 is coming soon! Special thanks to @tonyzzhao for throwing random objects into the scene, and @chelseabfinn for the heavy pot (>3 lbs)! Stay tuned!
9
58
380
@zipengfu
Zipeng Fu
5 months
We open-source all the software and data of Mobile ALOHA!
Project Website 🛜:
Code for Imitation Learning 🖥️:
Data 📊:
2
35
303
@zipengfu
Zipeng Fu
5 months
The robot is teleoperated in the video (for now!) Check out @tonyzzhao 's thread on how we designed the low-cost open-source hardware and teleoperation system!
@tonyzzhao
Tony Z. Zhao
5 months
Introducing 𝐌𝐨𝐛𝐢𝐥𝐞 𝐀𝐋𝐎𝐇𝐀🏄 -- Hardware! A low-cost, open-source mobile manipulator. One of the highest-effort projects of my past 5 years! Not possible without co-lead @zipengfu and @chelseabfinn . In the end, what's better than cooking yourself a meal with the 🤖🧑‍🍳
236
1K
5K
15
37
257
@zipengfu
Zipeng Fu
2 years
Super excited to announce that I will join @StanfordAILab for my PhD as a Stanford Graduate Fellow to keep exploring Robotics & AI. Deeply grateful to my advisor @pathak2206 and mentors Jitendra Malik & @ashishkr9311 for their huge support and guidance! Will miss @CarnegieMellon
16
7
236
@zipengfu
Zipeng Fu
5 months
A great time spending days and nights with @tonyzzhao at the brand-new Stanford Robotics Center and @chelseabfinn 's lab. Much funnier with the sound on!
@tonyzzhao
Tony Z. Zhao
5 months
Robots are not ready to take over the world yet! @zipengfu and I just compiled a video of the dumbest mistakes 𝐌𝐨𝐛𝐢𝐥𝐞 𝐀𝐋𝐎𝐇𝐀🏄 made in autonomous mode 🤣 We are also planning to organize some live demos after taking a break. Stay tuned!
66
222
1K
19
30
222
@zipengfu
Zipeng Fu
5 months
How do we achieve this with only 50 demos? The key is to co-train imitation learning algorithms with static ALOHA data. We found this to consistently improve performance, especially for tasks that require precise manipulation.
2
16
200
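A minimal sketch of what this co-training recipe could look like, assuming a PyTorch-style imitation setup: every batch mixes the 50 Mobile ALOHA demos with the larger static ALOHA dataset at a fixed ratio. The dataset sizes, the 50/50 batch split, and the linear `policy` stand-in are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch of co-training: each gradient step draws half its batch
# from the 50 Mobile ALOHA demos and half from the static ALOHA data,
# so the small target-task dataset is never drowned out.
# Sizes, ratio, and the linear policy are illustrative assumptions.
import torch
from torch.utils.data import TensorDataset, DataLoader

def dummy_demos(n, obs_dim=64, act_dim=14):
    # Stand-in for (observation, action) pairs from teleoperated demos.
    return TensorDataset(torch.randn(n, obs_dim), torch.randn(n, act_dim))

mobile_data = dummy_demos(50)      # target mobile-manipulation demos
static_data = dummy_demos(2000)    # prior static ALOHA demos

mobile_loader = DataLoader(mobile_data, batch_size=16, shuffle=True)
static_loader = DataLoader(static_data, batch_size=16, shuffle=True)

policy = torch.nn.Linear(64, 14)   # placeholder for ACT / Diffusion Policy / VINN
optim = torch.optim.Adam(policy.parameters(), lr=1e-4)

for (obs_m, act_m), (obs_s, act_s) in zip(mobile_loader, static_loader):
    obs = torch.cat([obs_m, obs_s])          # mixed batch
    act = torch.cat([act_m, act_s])
    loss = torch.nn.functional.mse_loss(policy(obs), act)
    optim.zero_grad()
    loss.backward()
    optim.step()
```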
@zipengfu
Zipeng Fu
5 months
Want to dive deeper into the hardware of Mobile ALOHA? Check out 𝐌𝐨𝐛𝐢𝐥𝐞 𝐀𝐋𝐎𝐇𝐀🏄 -- Hardware from co-lead @tonyzzhao !
@tonyzzhao
Tony Z. Zhao
5 months
Introducing 𝐌𝐨𝐛𝐢𝐥𝐞 𝐀𝐋𝐎𝐇𝐀🏄 -- Hardware! A low-cost, open-source mobile manipulator. One of the highest-effort projects of my past 5 years! Not possible without co-lead @zipengfu and @chelseabfinn . In the end, what's better than cooking yourself a meal with the 🤖🧑‍🍳
236
1K
5K
10
18
178
@zipengfu
Zipeng Fu
5 months
Our video is clearly inspired by the iconic PR1 video, showing that "the hardware is capable; what we need is better AI to make the robot smart enough to do things on its own" (quote @pabbeel ). We present our first steps in bridging the gap between teleop & autonomy:
@zipengfu
Zipeng Fu
5 months
Introducing 𝐌𝐨𝐛𝐢𝐥𝐞 𝐀𝐋𝐎𝐇𝐀🏄 -- Learning! With 50 demos, our robot can autonomously complete complex mobile manipulation tasks:
- cook and serve shrimp🦐
- call and take an elevator🛗
- store a 3 lbs pot in a two-door cabinet
Open-sourced! Co-led with @tonyzzhao , @chelseabfinn
188
891
4K
11
24
166
@zipengfu
Zipeng Fu
8 months
Very glad to host @UnitreeRobotics at @StanfordAILab today! RL controllers are now the industry's default choice.
2
10
144
@zipengfu
Zipeng Fu
2 years
For so long, legged manipulators have been expensive and have used *modular* control pipelines requiring immense engineering by large teams. In #CoRL2022 (Oral) we present Deep Whole-Body Control, achieving dynamic behaviors on our custom low-cost hardware. 🧵
1
21
121
@zipengfu
Zipeng Fu
7 months
@UnitreeRobotics B2 has the best quadruped controller I've ever seen, perfectly combining the robustness of RL controllers with the naturalness of model-based controllers. Just incredible...
1
20
122
@zipengfu
Zipeng Fu
5 months
Co-training (1) improves performance across all tasks, (2) is compatible with ACT, Diffusion Policy, and VINN, and (3) is robust to different data mixtures.
1
15
111
@zipengfu
Zipeng Fu
8 months
Trained fully in sim, our parkour policy has emergent re-trying behaviors, allowing the robot to attempt overcoming an obstacle multiple times if it initially fails. The robot learns to push against the obstacle, ensuring adequate run-up space for subsequent attempts.
3
12
88
@zipengfu
Zipeng Fu
2 years
Thanks @_akhaliq for sharing our latest work on Deep Whole-Body Control! A detailed explanation thread is coming soon.
@_akhaliq
AK
2 years
Deep Whole-Body Control: Learning a Unified Policy for Manipulation and Locomotion abs: project page:
3
58
299
2
5
52
@zipengfu
Zipeng Fu
7 months
Results from the Japanese robotics silo just keep impressing me again and again ...
@WatakoLab
わたこ
7 months
Finally, we've finally come this far... Just a little more, just a little more...
60
2K
8K
0
5
47
@zipengfu
Zipeng Fu
1 year
Check out our #CoRL2022 Oral Talk and live demo of "Deep Whole-Body Control" given by @pathak2206 (w/ @xuxin_cheng ). Super excited it made it into the list of 3 finalists for the Best Systems Paper Award! Project website: Oral Talk:
2
4
45
@zipengfu
Zipeng Fu
8 months
Our parkour policy can be deployed on low-cost robots (e.g. A1, Go1) using only onboard compute (Nvidia Jetson), one onboard depth camera (Intel RealSense), and onboard power. No motion capture, LiDAR, multiple depth cameras, or heavy compute is used.
1
3
44
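As a rough illustration of what onboard-only deployment implies, here is a minimal control-loop sketch: one depth camera plus proprioception feed a single policy network running locally at a fixed rate. `read_depth_features`, `read_proprioception`, and `send_joint_targets` are hypothetical placeholder interfaces (not Unitree or RealSense APIs), and the policy shape and 50 Hz rate are assumptions.

```python
# Hedged sketch of an onboard-only control loop: everything runs locally
# on the robot; no motion capture or offboard compute in the loop.
# All interfaces below are hypothetical placeholders, not real robot APIs.
import time
import torch

policy = torch.nn.Linear(64 + 12, 12)   # placeholder for the trained policy

def read_depth_features():
    return torch.randn(64)               # placeholder: depth-image embedding

def read_proprioception():
    return torch.randn(12)               # placeholder: joint angles / IMU

def send_joint_targets(targets):
    pass                                 # placeholder: low-level motor command

CONTROL_HZ = 50
for _ in range(100):                     # one short control episode
    tic = time.time()
    obs = torch.cat([read_depth_features(), read_proprioception()])
    with torch.no_grad():
        send_joint_targets(policy(obs))
    # Sleep off the remainder of the control period to hold the rate.
    time.sleep(max(0.0, 1.0 / CONTROL_HZ - (time.time() - tic)))
```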
@zipengfu
Zipeng Fu
8 months
This project is led by @ziwenzhuang_leo and me, advised by @chelseabfinn and @zhaohang0124 . (Please consider following @ziwenzhuang_leo for more cool robot demos in the future!) Project website: Code: w/ @wang_jianren , Chris and Sören
7
5
40
@zipengfu
Zipeng Fu
8 months
How do we train our parkour policy? Stage 1: RL pre-training with soft dynamics constraints. We allow robots to penetrate obstacles, using an auto-curriculum that encourages them to gradually learn to overcome obstacles while minimizing penetration.
1
1
33
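A toy sketch of the soft-dynamics idea: penetration is allowed but penalized, and an automatic curriculum tightens the penalty as the policy improves. The reward terms, thresholds, and step sizes below are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of "soft dynamics" pre-training: obstacles are penetrable,
# and an auto-curriculum scales up the penetration penalty once the policy
# succeeds often enough. All coefficients are illustrative assumptions.

def parkour_reward(forward_velocity: float, penetration_depth: float,
                   curriculum: float) -> float:
    # curriculum in [0, 1]: 0 = fully soft dynamics, 1 = near-hard dynamics.
    progress = forward_velocity
    penalty = curriculum * penetration_depth
    return progress - penalty

def update_curriculum(curriculum: float, success_rate: float,
                      step: float = 0.05) -> float:
    # Tighten constraints when the policy clears obstacles reliably,
    # relax them when it struggles.
    if success_rate > 0.8:
        return min(1.0, curriculum + step)
    if success_rate < 0.4:
        return max(0.0, curriculum - step)
    return curriculum

# Toy usage: an improving policy drives the curriculum toward hard dynamics.
c = 0.0
for success_rate in [0.3, 0.5, 0.85, 0.9, 0.95]:
    c = update_curriculum(c, success_rate)
    print(f"success={success_rate:.2f} -> curriculum={c:.2f}, "
          f"reward={parkour_reward(1.0, 0.1, c):.3f}")
```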
@zipengfu
Zipeng Fu
4 months
biped locomotion is solved by @ZhongyuLi4 🙂
@ZhongyuLi4
Zhongyu Li
4 months
Interested in making your bipedal robots athletes? We summarized our RL work on creating robust & adaptive controllers for general bipedal skills. A 400m dash, running over terrains/against perturbations, targeted jumping, compliant walking: not a problem for bipeds now.🧵👇
15
90
441
1
3
31
@zipengfu
Zipeng Fu
8 months
Stage 3: Distillation. After each individual parkour skill is learned, we use DAgger to distill them into a single vision-based parkour policy (parameterized by an RNN) that has memory and can be deployed on a legged robot using only onboard perception and compute.
1
2
29
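A minimal sketch of this distillation step, under stated assumptions: the student is a GRU over depth features, the expert is queried for action labels on the student's own rollouts, and the regression is plain MSE. The dimensions, the toy `fake_expert`, and the training interface are all illustrative, not the project's actual code.

```python
# Hedged sketch of DAgger-style distillation: roll out the recurrent
# vision student, label the visited states with the privileged expert's
# actions, and regress the student onto those labels.
# All dimensions and the toy expert are illustrative assumptions.
import torch
import torch.nn as nn

class StudentPolicy(nn.Module):
    def __init__(self, vision_dim=128, hidden_dim=256, act_dim=12):
        super().__init__()
        self.rnn = nn.GRU(vision_dim, hidden_dim, batch_first=True)  # memory
        self.head = nn.Linear(hidden_dim, act_dim)

    def forward(self, depth_features, hidden=None):
        out, hidden = self.rnn(depth_features, hidden)
        return self.head(out), hidden

def dagger_step(student, expert_fn, depth_seq, optimizer):
    # depth_seq: (batch, time, vision_dim) features from student rollouts.
    student_actions, _ = student(depth_seq)
    with torch.no_grad():
        expert_actions = expert_fn(depth_seq)  # privileged expert labels
    loss = nn.functional.mse_loss(student_actions, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random stand-ins for rollout data and the expert:
student = StudentPolicy()
opt = torch.optim.Adam(student.parameters(), lr=3e-4)
fake_expert = lambda seq: torch.zeros(seq.shape[0], seq.shape[1], 12)
print(dagger_step(student, fake_expert, torch.randn(8, 50, 128), opt))
```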
@zipengfu
Zipeng Fu
2 months
At this moment, the robot dog knows it's actually a humanoid in disguise. Evolution!!
@leggedrobotics
Robotic Systems Lab
2 months
🔥Exciting news 🤖 Our latest research by @HoellerDavid , @rdn_nikita , @2nisi in @SciRobotics unlocks new achievements: unprecedented agility in quadrupedal robots, mastering locomotion, navigation, and perception through deep reinforcement learning! @NVIDIARobotics
22
124
608
3
0
28
@zipengfu
Zipeng Fu
8 months
Stage 2: RL fine-tuning with hard dynamics constraints. We enforce all dynamics constraints and fine-tune the behaviors learned in the pre-training stage with realistic dynamics.
1
1
26
@zipengfu
Zipeng Fu
6 months
@ziwenzhuang_leo
Ziwen Zhuang
6 months
CoRL 2023 completed! Demo succeeded! Finalist achieved! 🎊🎉🍾
1
13
96
1
2
26
@zipengfu
Zipeng Fu
5 months
we haven't shown all of our results. coming soon!
@nejoom
nejoom
5 months
1
0
5
6
0
24
@zipengfu
Zipeng Fu
2 months
who doesn't want a bag of trail mix? congrats @lucy_x_shi
@lucy_x_shi
Lucy Shi
2 months
Introducing Yell At Your Robot (YAY Robot!) 🗣️- a fun collaboration b/w @Stanford and @UCBerkeley 🤖 We enable robots to improve on-the-fly from language corrections: robots rapidly adapt in real-time and continuously improve from human verbal feedback. YAY Robot enables
17
79
461
1
2
18
@zipengfu
Zipeng Fu
2 months
Imagine combining this pipeline with Vision Pros. Maybe Apple has a huge edge in in-the-wild robot data collection? Congrats @chenwang_j and the team!
@chenwang_j
Chen Wang
2 months
Can we use wearable devices to collect robot data without actual robots? Yes! With a pair of gloves🧤! Introducing DexCap, a portable hand motion capture system that collects 3D data (point cloud + finger motion) for training robots with dexterous hands Everything open-sourced
21
131
624
1
2
16
@zipengfu
Zipeng Fu
1 year
Huge thanks to @anag004 @shikharbahl @ashishkr9311 for helping with live demos. Video recording by @xuxin_cheng . Photo credit to @breadli428
1
2
16
@zipengfu
Zipeng Fu
2 years
Join us at #CVPR2022 for robot demos
@pathak2206
Deepak Pathak
2 years
Attending my first in-person conference since the pandemic at #CVPR2022 . We gave live demos of our robots during my talk at the Open-World Vision workshop. The convention center mostly had dull flat ground, so we had to find scraps and be creative with them to build "difficult" terrains! 😅
2
17
215
0
0
15
@zipengfu
Zipeng Fu
2 years
Excited to share our work on agile locomotion! Inspired by biomechanics, elegant gaits emerge from minimizing energy consumption without any motion or imitation priors.
@pathak2206
Deepak Pathak
2 years
Excited to report our progress on agile locomotion! In CoRL'21 paper, we simplify RMA rewards with just an energy term motivated by biomechanics. Optimal gaits *emerge* across speeds w/o *any* priors like high-speed galloping with emergent flight phase!!
4
35
246
0
0
12
@zipengfu
Zipeng Fu
5 months
@scott_e_reed @tonyzzhao @chelseabfinn Thanks Scott! The compute and NN models were maybe not ready 10 years ago. PR1 also showed amazing teleop results way back.
1
0
6
@zipengfu
Zipeng Fu
5 months
1
0
8
@zipengfu
Zipeng Fu
8 months
@lordnarfz0g we test our robot system on A1 and Go1 robots from the amazing @UnitreeRobotics
1
0
7
@zipengfu
Zipeng Fu
8 months
@m2t016 love their work. The main difference is that our project uses vision for high-frequency agile locomotion in an end-to-end fashion, not just skill selection.
1
1
7
@zipengfu
Zipeng Fu
7 months
@ziwenzhuang_leo
Ziwen Zhuang
7 months
We released our checkpoints for Go1. Now you can download and try the policy on your Go1! Stay tuned for our in-person CoRL demo🥳
0
3
18
2
2
6
@zipengfu
Zipeng Fu
8 months
@Scobleizer @nfkmobile around $8k including all the accessories like the depth cam and the Jetson compute board
1
1
6
@zipengfu
Zipeng Fu
8 months
Thanks Jim for the spotlight and great summary!! More details at
@DrJimFan
Jim Fan
8 months
This is Open Robot Parkour: agility at the Boston Dynamics level, but now open-source for everyone.🐕 Two NVIDIA technologies at play here: 1. IsaacGym: train the robot dog by speeding up reality 10,000x with massively parallel simulation on GPU. 2. NVIDIA Jetson NX for onboard compute.
43
295
1K
0
0
5
@zipengfu
Zipeng Fu
8 months
@pathak2206 Thanks a lot Deepak! Your tweet means a lot! Learned a lot from you!
0
0
5
@zipengfu
Zipeng Fu
3 months
@chichengcc @YouTube wow you're becoming a professional youtuber lol
1
0
5
@zipengfu
Zipeng Fu
2 years
Amazing home robot results from @shikharbahl !
@pathak2206
Deepak Pathak
2 years
How can we enable robots to perform diverse tasks? Designing rewards or demos for each task is not scalable. We propose WHIRL which learns by watching a single human video followed by autonomous exploration *directly* in the real world (no simulation)!
14
152
754
0
1
4
@zipengfu
Zipeng Fu
2 years
Check out our #CVPR2022 paper at poster 124a now!
@pathak2206
Deepak Pathak
2 years
Today at #CVPR22 , we present our paper on visual navigation with legged robots in the morning. Our robot learns to walk (low-level) and plan (high-level) the path to a goal, avoiding dynamic as well as "invisible" obstacles thanks to the coupling of vision with proprioception.
2
20
205
0
0
4
@zipengfu
Zipeng Fu
5 months
@allenzren @tonyzzhao @chelseabfinn Thanks Allen! We found two parallel-jaw grippers are powerful, but maybe two dex hands would be even better!
0
0
4
@zipengfu
Zipeng Fu
5 months
1
0
3
@zipengfu
Zipeng Fu
8 months
@thetjadams Yes, you can modify the training pipeline for another quadrupedal robot by re-training with the URDF of that robot.
0
0
3
@zipengfu
Zipeng Fu
2 years
This work is done jointly with @whshkan and @pathak2206 . A detailed explanation thread can be found here: 2/
@pathak2206
Deepak Pathak
2 years
An arm can increase the utility of legged robots. But due to high dimensionality, most prior methods decouple learning for legs & arm. In #CoRL '22 (Oral), we present an end-to-end approach for *whole-body control* to get dynamic behaviors on the robot.
7
47
298
1
0
3
@zipengfu
Zipeng Fu
5 months
@pathak2206 @tonyzzhao @chelseabfinn Turns out people without prior experience with Mobile ALOHA can approach the teleop speed of @tonyzzhao and me after 5 trials of practice.
0
0
3
@zipengfu
Zipeng Fu
1 month
0
0
3
@zipengfu
Zipeng Fu
2 years
In #CVPR2022 , we present multi-modal navigation for legged robots. Our robot learns to walk (low-level) and plan (high-level) paths to goals, avoiding dynamic & "invisible" obstacles thanks to the coupling of vision & proprioception. w/ @anag004 @HaozhiQ 4/
1
1
2
@zipengfu
Zipeng Fu
8 months
@viktor_m81 @UnitreeRobotics Thanks Viktor! In addition to Isaac Gym, it also uses a Nvidia Jetson board for onboard network inference!
0
0
2
@zipengfu
Zipeng Fu
9 years
0
0
2
@zipengfu
Zipeng Fu
2 years
@CVPR For camera-ready, does at least one author need to be registered before the camera-ready deadline?
1
0
2
@zipengfu
Zipeng Fu
2 years
Apparently, my co-author & friend Xuxin Cheng changed his Twitter handle 😛. He can be found here: @xuxin_cheng
0
0
2
@zipengfu
Zipeng Fu
5 months
0
0
2
@zipengfu
Zipeng Fu
1 month
0
0
2
@zipengfu
Zipeng Fu
5 months
@chichengcc @tonyzzhao Thanks Cheng for your and your lab's support from the beginning!
0
0
2
@zipengfu
Zipeng Fu
5 months
0
0
2
@zipengfu
Zipeng Fu
5 months
@s_kajita Thanks Shuuji!!
0
0
1
@zipengfu
Zipeng Fu
5 months
@pabbeel Thank you Pieter!!
0
0
1
@zipengfu
Zipeng Fu
2 years
@pathak2206 @StanfordAILab @ashishkr9311 @CarnegieMellon Thanks Deepak!! I learned a lot from you! Super fortunate to be advised by you!
0
0
1
@zipengfu
Zipeng Fu
5 months
@Ransalu @charlesjavelona @tonyzzhao @chelseabfinn @StanfordAILab Thanks for your interest! We will release all the details shortly!
0
0
1
@zipengfu
Zipeng Fu
7 months
@ChongZitaZhang @UnitreeRobotics I’d also like to know the answers to these questions lol
0
0
1
@zipengfu
Zipeng Fu
8 months
0
0
1
@zipengfu
Zipeng Fu
2 years
This line of work starts with our #RSS2021 paper on Rapid Motor Adaptation (RMA). RMA allows a legged robot trained *fully* in simulation to *adapt* online to diverse real-world terrains in real-time! 6/6
1
0
1
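A tiny sketch of the RMA structure, with illustrative dimensions: a base policy conditioned on a latent "extrinsics" vector, plus an adaptation module that estimates that latent online from recent state-action history, which is what enables real-time adaptation without any real-world fine-tuning. The module shapes and the GRU choice here are assumptions, not the paper's exact architecture.

```python
# Hedged sketch of the RMA idea: the base policy consumes the current state
# plus a latent "extrinsics" vector z; an adaptation module estimates z
# online from recent state-action history. Dimensions are assumptions.
import torch
import torch.nn as nn

STATE, ACT, Z = 30, 12, 8

base_policy = nn.Sequential(
    nn.Linear(STATE + Z, 64), nn.ReLU(), nn.Linear(64, ACT))
adaptation_module = nn.GRU(input_size=STATE + ACT, hidden_size=Z,
                           batch_first=True)

history = torch.randn(1, 20, STATE + ACT)  # last 20 (state, action) pairs
_, z = adaptation_module(history)          # online extrinsics estimate
state = torch.randn(1, STATE)
action = base_policy(torch.cat([state, z.squeeze(0)], dim=-1))
print(action.shape)                        # torch.Size([1, 12])
```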
@zipengfu
Zipeng Fu
8 months
0
0
1
@zipengfu
Zipeng Fu
5 months
@yjy0625 @tonyzzhao Thanks Jingyun!
0
0
1
@zipengfu
Zipeng Fu
8 months
@xf1280 Thanks Fei!
0
0
1
@zipengfu
Zipeng Fu
8 months
@class_OpenGL The robot sees it at 00:08. A single frame is enough.
1
0
1
@zipengfu
Zipeng Fu
17 days
@kenziyuliu 🚀🚀🚀
0
0
1