Jianfei Yang

@Jianfei_AI

Followers 1K · Following 334 · Media 28 · Statuses 120

Assistant Professor @NTUsg Prev Researcher @Harvard @UCBerkeley @UTokyo_News

Singapore
Joined February 2024
@Jianfei_AI
Jianfei Yang
5 days
Imitation learning of massive skills is more robust than current VLAs. The problem is how to scale it up. Thanks for the interesting paper and discussion @pitvit_.
@pitvit_
Pietro Vitiello
5 days
Thanks for having us @reborn_agi! It's always great fun to talk about robotics, especially with @Jianfei_AI. Check this out if you want to hear about scaling imitation learning and large action models. We're eager to hear people's different opinions!
0
0
6
@Jianfei_AI
Jianfei Yang
11 days
When the most useful function of ChatGPT @OpenAI is down.
[image attached]
2
0
5
@Jianfei_AI
Jianfei Yang
11 days
🚀 PhD Opportunities in Embodied AI @ MARS Lab (NTU, Singapore) 🚀 The Multimodal AI and Robotic Systems (MARS) Lab at MAE, Nanyang Technological University (NTU), led by me, is recruiting PhD students in Embodied AI. 🔬 What we offer: 1. Access to state-of-the-art robots, …
[image attached]
6
27
241
@Jianfei_AI
Jianfei Yang
12 days
RT @reborn_agi: Reborn Demo Series: teach robots to walk in only 3 HOURS with 1 GPU! Watch our Booster T1 robot learn to walk by a super-e…
0
32
0
@Jianfei_AI
Jianfei Yang
13 days
RT @reborn_agi: 🚀 Announcing the Reborn x Roboverse x Unitree Global Hackathon 🚀 The ultimate robotics + simulation challenge is here! Fro…
0
41
0
@Jianfei_AI
Jianfei Yang
26 days
Finally, OpenAI instead of CloseAI. The 20B model will be super useful for on-device robot task planning.
@reborn_agi
Reborn
26 days
1
0
4
@Jianfei_AI
Jianfei Yang
27 days
Cloth manipulation has long been one of the hardest challenges in robotics—especially for long-horizon tasks due to the complex, high-dimensional dynamics of deformable materials. This work tackles it beautifully with a transformer-based diffusion model, achieving strong…
@Haoyang1i
Haoyang Li
29 days
Modeling the complex 3D dynamics of deformable objects is a critical challenge in robotics. How can we achieve accurate long-horizon prediction for tasks like cloth manipulation 👕? Thrilled to announce that our work, "UniClothDiff," has been accepted to #CoRL2025! We leverage a…
0
2
18
@Jianfei_AI
Jianfei Yang
28 days
RT @YiMaTweets: Visited Kennedy Space Center in Florida today. Very inspiring: "We choose to do it, not because it is easy, but because it…"
0
4
0
@Jianfei_AI
Jianfei Yang
30 days
These look like robot designs made by middle school students — and they're already impressive. Just imagine how exciting the future of competitions like RoboCup, Robocon, and Robomaster will be!
@reborn_agi
Reborn
30 days
Must robots look like us? From underwater snakes to flying drones and spider bots, robotic forms are evolving far beyond humanoids. These remind me of Transformers, each of which comes with its own advantage—speed, agility, adaptability. Maybe the future isn't human-shaped…
0
0
3
@Jianfei_AI
Jianfei Yang
1 month
Robot Mind Episode #8: K-Bot. A talk with K-Scale, the first US-made open-source humanoid robot company. Their robot is more flexible and even supports mechanical replacement of its arms and legs, enabling more scenarios. Thanks, Rui, the COO of K-Scale, for joining us.
@reborn_agi
Reborn
1 month
🎙️ Full episode of Robot Mind (ep#8) is live! We sit down with Rui, the COO of @kscalelabs, to unpack K-Bot — the first open-source, affordable, American-made humanoid robot 🇺🇸🤖. Topics include: – Why K-Scale is betting on open-source embodied intelligence – U.S. vs China:…
2
0
12
@Jianfei_AI
Jianfei Yang
1 month
Tasks that seem "simple" to humans—like cooking, cleaning, or feeding a pet—often require richer control models, complex scene understanding, and high-level decision-making. These are built on top of the intuitive common sense and multimodal perception that our brains evolved.
@DrJimFan
Jim Fan
1 month
I'm observing a mini Moravec's paradox within robotics: gymnastics that are difficult for humans are much easier for robots than "unsexy" tasks like cooking, cleaning, and assembling. It leads to a cognitive dissonance for people outside the field, "so, robots can parkour &…
2
0
6
@Jianfei_AI
Jianfei Yang
1 month
RT @reborn_agi: 🚨 New episode of Robot Mind.We are excited to be joined by Rui Xu, COO of Kscale @kscalelabs, about the story and motivatio….
0
4
0
@Jianfei_AI
Jianfei Yang
1 month
Exactly. I always feel that I understand my research more deeply after drafting it… This also aligns with the saying "the best way to understand something is to teach it."
@DKThomp
Derek Thompson
1 month
Yes. Writing is not a second thing that happens after thinking. The act of writing is an act of thinking. Writing *is* thinking. Students, academics, and anyone else who outsources their writing to LLMs will find their screens full of words and their minds emptied of thought.
[image attached]
0
1
1
@Jianfei_AI
Jianfei Yang
1 month
Incredible! Every lab can now afford humanoid locomotion research thanks to Unitree.
@reborn_agi
Reborn
1 month
🚀🔥 The Unitree R1 just landed — and it's an absolute monster. Built for locomotion supremacy, it packs 26 DOFs into just 25kg of pure speed and agility — all for ~$6K, nearly 10× cheaper than the G1 🤯. Looks? Straight out of a sci-fi film, just like a space explorer 🧑‍🚀. We're…
0
0
6
@Jianfei_AI
Jianfei Yang
1 month
Human-centric data curation is becoming increasingly useful for robot learning. We should definitely explore more human data to fuel "robot imitation learning".
@RchalYang
Ruihan Yang
1 month
How can we leverage diverse human videos to improve robot manipulation? Excited to introduce EgoVLA — a Vision-Language-Action model trained on egocentric human videos by explicitly modeling wrist & hand motion. We build a shared action space between humans and robots, enabling…
0
0
1
@Jianfei_AI
Jianfei Yang
1 month
A good use case!
@reborn_agi
Reborn
1 month
A humanoid robot is directing traffic 🤖🚦. Would you follow its instructions?
0
0
1
@Jianfei_AI
Jianfei Yang
1 month
Great takeaway: automatic reward engineering is the next wave of feature engineering :)
@ZhaoMandi
Mandi Zhao
1 month
One amusing takeaway from doing RL in these massively parallelized sim environments is that reward engineering matters more than ever. A small detail in the reward function could make a huge difference: with 10k+ parallel threads to explore in, the policy will exploit any caveat…
0
0
2