YI LI

@yili_ai

Followers
181
Following
47
Media
16
Statuses
38

🎓UW PhD | Ex-Nvidia, Microsoft Research Asia, Tsinghua

Joined June 2023
@yili_ai
YI LI
5 months
🚀 Meet 🐹HAMSTER, our new hierarchical Vision-Language-Action (VLA) framework for robot manipulation!
🔹 High-level VLM for perception & reasoning
🔹 Low-level 3D policy for precise control
🔹 Bridged by 2D paths for trajectory planning
HAMSTER learns from cost-effective
7
26
105
@yili_ai
YI LI
1 month
RT @ryan_hoque: Imitation learning has a data scarcity problem. Introducing EgoDex from Apple, the largest and most diverse dataset of de….
0
95
0
@yili_ai
YI LI
2 months
RT @jang_yoel: Introducing 𝐃𝐫𝐞𝐚𝐦𝐆𝐞𝐧! We got humanoid robots to perform totally new 𝑣𝑒𝑟𝑏𝑠 in new environments through video world models….
0
74
0
@yili_ai
YI LI
2 months
RT @JunyaoShi: On my way to Atlanta to present ZeroMimic: Distilling Robotic Manipulation Skills from Web Videos at @ieee_ras_icra! Stay tu….
0
4
0
@yili_ai
YI LI
2 months
RT @Tesla_Optimus: Was just getting warmed up
0
10K
0
@yili_ai
YI LI
2 months
RT @NVIDIARobotics: 🎊 That's a wrap on #ICLR2025. Shout out to all the amazing research in #robotics, machine vision, and more. Missed it….
0
20
0
@yili_ai
YI LI
2 months
RT @RemiCadene: Meet SO-101, next-gen robot arm for all, by @huggingface 🤗. Enables smooth takeover to boost AI capabilities, faster assemb….
0
139
0
@yili_ai
YI LI
4 months
RT @yukez: Thrilled to announce GR00T N1, our open foundation model for generalist humanoid robots! GR00T N1 adopts a dual-system design,….
0
58
0
@yili_ai
YI LI
4 months
RT @QinYuzhe: Meet our first general-purpose robot at @DexmateAI. Adjustable height from 0.66m to 2.2m: compact en….
0
33
0
@yili_ai
YI LI
4 months
RT @JasonQSY: Exciting News! Our new paper "3D-MVP" is out! We propose a novel approach for 3D multi-view pretraining using masked autoenco….
0
26
0
@yili_ai
YI LI
4 months
It is exciting to see other hierarchical VLA models in the same month! 🎉 Hi Robot from @physical_int uses language, Helix from @Figure uses latent embeddings, and our HAMSTER from @NVIDIA uses 2D paths. We’re all inspired by Thinking, Fast and Slow—combining fast, intuitive (System
@physical_int
Physical Intelligence
4 months
Vision-language models can control robots, but what if the prompt is too complex for the robot to follow directly? We developed a way to get robots to “think through” complex instructions, feedback, and interjections. We call it the Hierarchical Interactive Robot (Hi Robot).
0
7
28
@yili_ai
YI LI
4 months
RT @chris_j_paxton: It's extremely clear to me that this sort of approach is the future, especially in relatively structured warehouse and l….
0
11
0
@yili_ai
YI LI
4 months
RT @chris_j_paxton: working reproduction of pi0! awesome stuff.
0
6
0
@yili_ai
YI LI
4 months
RT @RemiCadene: ⛔ STOP WHAT YOU'RE DOING ⛔ THERE IS A NEW ROBOT IN TOWN ~ LeKiwi 🥝 ~ Build it yourself to automate daily chores with….
0
62
0
@yili_ai
YI LI
4 months
RT @xiao_ted: There is so much potential in moving beyond simple natural language when building robot foundation models. Trajectories are a….
0
5
0
@yili_ai
YI LI
4 months
RT @abhishekunique7: Over the last few months, we’ve been thinking about how to learn from “off-domain” data - data from non-robot sources….
0
17
0
@yili_ai
YI LI
5 months
RT @deepseek_ai: 🚀 Day 0: Warming up for #OpenSourceWeek! We're a tiny team @deepseek_ai exploring AGI. Starting next week, we'll be open….
0
3K
0
@yili_ai
YI LI
5 months
Check out our results in the screenshots from Figure AI's video! 📸 The prompt is displayed in the top-left corner, and the trajectory transitions from blue to red, with blue circles indicating gripper closure and red circles for opening. For more details, check out my other
0
0
1