
Jianfei Yang
@Jianfei_AI
Followers
1K
Following
334
Media
28
Statuses
120
Assistant Professor @NTUsg Prev Researcher @Harvard @UCBerkeley @UTokyo_News
Singapore
Joined February 2024
Imitation learning over a massive set of skills is more robust than current VLAs. The problem is how to scale it up. Thanks for the interesting paper and discussion @pitvit_.
Thanks for having us @reborn_agi! It's always great fun to talk about robotics, especially with @Jianfei_AI. Check this out if you want to hear about scaling imitation learning and large action models. We're eager to hear people's different opinions!
0
0
6
RT @reborn_agi: Reborn Demo Series: teach robots to walk in only 3 HOURS with 1 GPU! Watch our Booster T1 robot learn to walk by a super-e…
0
32
0
RT @reborn_agi: 🚀 Announcing the Reborn x Roboverse x Unitree Global Hackathon 🚀. The ultimate robotics + simulation challenge is here! Fro…
0
41
0
Cloth manipulation has long been one of the hardest challenges in robotics—especially for long-horizon tasks due to the complex, high-dimensional dynamics of deformable materials. This work tackles it beautifully with a transformer-based diffusion model, achieving strong.
Modeling the complex 3D dynamics of deformable objects is a critical challenge in robotics. How can we achieve accurate long-horizon prediction for tasks like cloth manipulation 👕? Thrilled to announce that our work, "UniClothDiff," has been accepted to #CoRL2025! We leverage a
0
2
18
RT @YiMaTweets: Visited Kennedy Space Center in Florida today. Very inspiring: “We choose to do it, not because it is easy, but because it…
0
4
0
These look like robot designs made by middle school students — and they’re already impressive. Just imagine how exciting the future of competitions like RoboCup, Robocon, and Robomaster will be!
Must robots look like us? From underwater snakes to flying drones and spider bots, robotic forms are evolving far beyond humanoids. These remind me of Transformers, each of which comes with its own advantage—speed, agility, adaptability. Maybe the future isn’t human-shaped
0
0
3
Robot Mind Episode #8: K-Bot. We talk to K-Scale, the first US-based open-source humanoid robot company. Their robot is highly flexible and even supports mechanical replacement of its arms and legs, opening up more use scenarios. Thanks to Rui, the COO of K-Scale, for joining us.
🎙️ Full episode of Robot Mind (ep#8) is live! We sit down with Rui, the COO of @kscalelabs, to unpack K-Bot — the first open-source, affordable, American-made humanoid robot 🇺🇸🤖. Topics include: – Why K-Scale is betting on open-source embodied intelligence – U.S. vs China:
2
0
12
Tasks that seem "simple" to humans—like cooking, cleaning, or feeding a pet—often require richer control models, complex scene understanding, and high-level decision-making. These are built on top of the intuitive common sense and multimodal perception that our brains have evolved.
I'm observing a mini Moravec's paradox within robotics: gymnastics that are difficult for humans are much easier for robots than "unsexy" tasks like cooking, cleaning, and assembling. It leads to a cognitive dissonance for people outside the field, "so, robots can parkour &
2
0
6
RT @reborn_agi: 🚨 New episode of Robot Mind. We are excited to be joined by Rui Xu, COO of Kscale @kscalelabs, about the story and motivatio…
0
4
0
Exactly. I always feel that I understand my research more deeply after I draft it… This also aligns with the claim that “the best way to understand something is to teach it”.
Yes. Writing is not a second thing that happens after thinking. The act of writing is an act of thinking. Writing *is* thinking. Students, academics, and anyone else who outsources their writing to LLMs will find their screens full of words and their minds emptied of thought.
0
1
1
Incredible! Every lab can now afford humanoid locomotion research thanks to Unitree.
🚀🔥 The Unitree R1 just landed — and it's an absolute monster. Built for locomotion supremacy, it packs 26 DOFs into just 25kg of pure speed and agility — all for ~$6K, nearly 10× cheaper than the G1 🤯. Looks? Straight out of a sci-fi film, just like a space explorer 🧑‍🚀. We’re
0
0
6
Human-centric data curation is becoming more and more useful for robot learning. We should definitely explore more human data to fuel “robot imitation learning”.
How can we leverage diverse human videos to improve robot manipulation?. Excited to introduce EgoVLA — a Vision-Language-Action model trained on egocentric human videos by explicitly modeling wrist & hand motion. We build a shared action space between humans and robots, enabling
0
0
1
Great takeaway: automatic reward engineering is the next wave of feature engineering :).
One amusing takeaway from doing RL in these massively parallelized sim environments is that reward engineering matters more than ever. A small detail in the reward function can make a huge difference: with 10k+ parallel threads to explore in, the policy will exploit any caveat
0
0
2
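The "policy will exploit any caveat" point in the quoted post above is easy to illustrate with a toy example. Below is a minimal sketch, not taken from the quoted work: the reward terms, coefficients, and function names (naive_reward, patched_reward, the 12-joint robot) are all hypothetical, and it only assumes NumPy. It shows a locomotion reward whose flat "alive bonus" can be farmed by standing still, and a patched version that tracks a target velocity and penalizes actuation effort so idling is no longer attractive.

```python
# Hypothetical sketch of reward-function loopholes in parallel sim RL.
# Not from any specific codebase; names and coefficients are illustrative.

import numpy as np


def naive_reward(base_vel_x: float, alive: bool) -> float:
    # Reward forward velocity plus a constant "alive" bonus.
    # Loophole: a policy can stand still (or jitter in place) and still
    # collect the alive bonus every step; with 10k+ parallel rollouts,
    # some environment will discover this local optimum quickly.
    return 1.0 * base_vel_x + 0.5 * float(alive)


def patched_reward(base_vel_x: float, alive: bool,
                   joint_torques: np.ndarray, target_vel: float = 1.0) -> float:
    # Track a target velocity instead of rewarding raw speed, shrink the
    # alive bonus so it cannot dominate, and penalize actuation effort so
    # reward cannot be farmed while idling or vibrating.
    vel_tracking = float(np.exp(-4.0 * (base_vel_x - target_vel) ** 2))
    effort_penalty = 1e-3 * float(np.sum(np.square(joint_torques)))
    alive_bonus = 0.1 * float(alive)
    return vel_tracking + alive_bonus - effort_penalty


if __name__ == "__main__":
    torques = np.zeros(12)  # a robot with 12 actuated joints, standing still
    print("naive reward while idle:  ", naive_reward(0.0, True))            # 0.5 per step, forever
    print("patched reward while idle:", patched_reward(0.0, True, torques))  # ~0.12, far below walking
```

The design point is the one from the post: each term looks harmless in isolation, but once thousands of parallel workers are searching the reward landscape, any term that pays out without progress becomes the behavior you actually train.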