
Yanjie Ze (@ZeYanjie)
CS PhD @Stanford. Focus on Humanoid Robot Learning.
Joined December 2020 · 3K Followers · 6K Following · 51 Media · 661 Statuses
RT @mastertoadster: Time for another blog post -- "The Inner Robot" -- on how our subconscious processes mislead us on the nature of human…
RT @martinpeticco: What’s keeping robot arms from working like human arms? They're big, slow, have the wrong joints, and can't conform to…
RT @YutongBAI1002: What would a World Model look like if we start from a real embodied agent acting in the real world? It has to have: 1)…
RT @YuXiang_IRVL: Details about how these motions are achieved: RL with motion reference trajectories. #RSS2025
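The technique mentioned above, RL with motion reference trajectories, generally means rewarding a simulated robot for staying close to a retargeted reference motion at every timestep. The sketch below only illustrates that general idea; the error terms, weights, and temperature constants are illustrative assumptions, not the reward used in the referenced work.

import numpy as np

def motion_tracking_reward(qpos, ref_qpos, root_pos, ref_root_pos,
                           w_joint=0.6, w_root=0.4):
    """Hypothetical per-step reward for tracking a reference motion.

    qpos, ref_qpos:         (num_joints,) current and reference joint positions
    root_pos, ref_root_pos: (3,) current and reference root positions
    Weights and temperature constants are made up for illustration.
    """
    joint_err = np.sum((qpos - ref_qpos) ** 2)
    root_err = np.sum((root_pos - ref_root_pos) ** 2)
    # Exponentiated errors keep each term in (0, 1]; the policy is then
    # trained with a standard RL algorithm to maximize the weighted sum.
    return w_joint * np.exp(-2.0 * joint_err) + w_root * np.exp(-10.0 * root_err)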
RT @RoboPapers: Ep#16 with @ZeYanjie on TWIST: Teleoperated Whole-Body Imitation System. Co-hosted by @chris_j_p…
RT @chris_j_paxton: We had a great discussion with @ZeYanjie about his new paper TWIST! Whole body teleoperation. And I do mean whole body:…
RT @RoboPapers: Full episode dropping soon! Geeking out with @ZeYanjie on TWIST: Teleoperated Whole-Body Imitation System.
RT @charles_rqi: Tesla Robotaxi: A New Era Begins. I’ve (very fortunately) been part of multiple robotaxi launches. But this one is differe…
RT @jianglong_ye: How to generate billion-scale manipulation demonstrations easily? Let us leverage generative models! 🤖✨ We introduce Dex…
RT @leggedrobotics: Best Systems Paper finalist at #RSS2025 🚀 Excited to share our work on a perceptive forward dynamics model for safe, p…
RT @LerrelPinto: We have developed a new tactile sensor, called e-Flesh, with a simple working principle: measure deformations in 3D printa…
RT @YXWangBot: 🤖 Do VLA models really listen to language instructions? Maybe not 👀 🚀 Introducing our RSS paper: CodeDiffuser -- using VLM…
I've seen live demos several times there; it is very interesting to see a robot actively search for objects and grasp a cup under occlusion. Congrats Haoyu @Haoyu_Xiong_! Some key insights I learned: DP + a pretrained RGB encoder works; no need to use a learn-from-scratch DP3 if you have…
Your bimanual manipulators might need a Robot Neck 🤖🦒. Introducing Vision in Action: Learning Active Perception from Human Demonstrations. ViA learns task-specific, active perceptual strategies—such as searching, tracking, and focusing—directly from human demos, enabling robust
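The "DP + pretrained RGB" insight above refers to pairing a diffusion policy with a frozen, ImageNet-pretrained 2D image encoder instead of training a 3D point-cloud policy such as DP3 from scratch. Below is a minimal sketch of that encoder choice only; the diffusion action head is abbreviated to a plain MLP, and the backbone, feature size, and action dimension are assumptions for illustration, not the setup from the referenced work.

import torch
import torch.nn as nn
import torchvision

class RGBPolicy(nn.Module):
    """Sketch: frozen ImageNet-pretrained ResNet-18 features feed an action head.

    In a real diffusion policy the head would be a denoising network
    conditioned on these features; an MLP stands in here for brevity.
    """
    def __init__(self, action_dim=7):
        super().__init__()
        backbone = torchvision.models.resnet18(
            weights=torchvision.models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Identity()        # expose the 512-d pooled feature
        for p in backbone.parameters():    # frozen, so nothing is learned from scratch
            p.requires_grad = False
        self.encoder = backbone
        self.head = nn.Sequential(nn.Linear(512, 256), nn.ReLU(),
                                  nn.Linear(256, action_dim))

    def forward(self, rgb):                # rgb: (B, 3, H, W) image batch
        with torch.no_grad():
            feat = self.encoder(rgb)
        return self.head(feat)

policy = RGBPolicy()
actions = policy(torch.randn(2, 3, 224, 224))   # dummy batch -> (2, 7) actions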
RT @wang_jianren: (1/n) Since its publication in 2017, PPO has essentially become synonymous with RL. Today, we are excited to provide you…
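Since the thread above is truncated, the snippet below shows only the standard PPO clipped surrogate objective from Schulman et al. (2017) that the tweet alludes to; it is not a summary of the linked post, and the tensor names are illustrative.

import torch

def ppo_clip_loss(log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Standard PPO clipped surrogate loss, returned as a quantity to minimize.

    log_probs, old_log_probs, advantages: 1-D tensors over sampled actions.
    """
    ratio = torch.exp(log_probs - old_log_probs)            # pi_theta / pi_theta_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()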
Quite amazing. 1. These dexterous skills are extremely precise. 2. So far I believe no approach other than imitation learning with massive, high-quality demonstrations can achieve such contact-rich skills. 3. And I cannot imagine how skillful these human teleoperators must be.
Today we're excited to share a glimpse of what we're building at Generalist. As a first step towards our mission of making general-purpose robots a reality, we're pushing the frontiers of what end-to-end AI models can achieve in the real world. Here's a preview of our early