Peide Huang

@peide_huang

Followers: 486 · Following: 923 · Media: 14 · Statuses: 63

Research scientist at Apple. Ph.D. @CarnegieMellon, M.S. @Stanford, B.E. @NTUsg. All opinions are my own.

Joined January 2021
@peide_huang
Peide Huang
2 months
🚨Introducing EgoDex, the largest egocentric video dataset to date focused on human dexterous manipulation, with structured annotations including 3D upper-body and hand tracking 🤲, camera pose 📷, and language annotation 💬. Kudos to the team and looking forward to what the…
@ryan_hoque
Ryan Hoque
2 months
Imitation learning has a data scarcity problem. Introducing EgoDex from Apple, the largest and most diverse dataset of dexterous human manipulation to date — 829 hours of egocentric video + paired 3D hand poses across 194 tasks. Now on arXiv: (1/4)
@peide_huang
Peide Huang
7 days
Great post from SL as always! One question on the surrogate vs. real data boundary: if human video or hand-gripper data is "surrogate," what about real robot data from robots with different morphologies/controllers than the target? Is that still "real," or does it become…
@svlevine
Sergey Levine
9 days
I wrote a fun little article about all the ways to dodge the need for real-world robot data. I think it has a cute title.
@peide_huang
Peide Huang
1 month
From my understanding, ROSETTA is basically RL from human+AI feedback. Aligning robots with human preferences will be a crucial piece of everyday robotics. Great work!
@mengdixu_
Mengdi Xu
1 month
I've always been thinking about how to make robots naturally co-exist with humans. The first step is having robots understand our unconstrained, dynamic preferences and follow them. 🤝 We proposed ROSETTA, which translates free-form language instructions into reward functions to…
@peide_huang
Peide Huang
1 month
RT @SnehalJauhri: Thank you to all the speakers & attendees for making the EgoAct workshop a great success! Congratulations to the winners…
@peide_huang
Peide Huang
1 month
Very happy that EgoDex received the Best Paper Award at the 1st EgoAct workshop at #RSS2025! Huge thanks to the organizing committee @SnehalJauhri @GeorgiaChal @GalassoFab10 @danfei_xu @YuXiang_IRVL for putting together this forward-looking workshop. Also kudos to my colleagues @ryan_hoque…
@peide_huang
Peide Huang
1 month
In RoboTool, we showed that LLMs provide robots with important prior knowledge about how to use tools. This time, @JunyaoShi and his colleagues showed that VLMs can provide guidance not only on how to use tools, but also on how to design them. Check out this…
arxiv.org: Tool use is a hallmark of advanced intelligence, exemplified in both animal behavior and robotic capabilities. This paper investigates the feasibility of imbuing robots with the ability to...
@JunyaoShi
Junyao Shi
1 month
💡Can robots autonomously design their own tools and figure out how to use them? We present VLMgineer 🛠️, a framework that leverages Vision Language Models with Evolutionary Search to automatically generate and refine physical tool designs alongside corresponding robot action…
@peide_huang
Peide Huang
2 months
Check out this extremely cool EgoDex data visualization tool created by @pablovelagomez1! Thanks for the great work!
@pablovelagomez1
Pablo Vela
2 months
Continuing work on the EgoDex dataset, I ported the entire test set to @rerundotio and created a @Gradio app to view it! Links below. This allows for a straightforward way to explore each episode of the (test) dataset and better understand how the hand tracking and SLAM…
@peide_huang
Peide Huang
2 months
Human egocentric video is the truly passively scalable data source! Great work from NYU and Berkeley!
@LerrelPinto
Lerrel Pinto
2 months
Imagine robots learning new skills—without any robot data. Today, we're excited to release EgoZero: our first steps in training robot policies that operate in unseen environments, solely from data collected through humans wearing Aria smart glasses. 🧵👇
@peide_huang
Peide Huang
4 months
Love to see the convergence of animation and whole-body control!
@frankzydou
Zhiyang (Frank) Dou
4 months
TokenHSI: Unified Synthesis of Physical Human-Scene Interactions through Task Tokenization. 📦🏃🤸 Unified Control Policy & Efficient Policy Adaptation for Various Human-Scene Interaction (HSI) Tasks! #CVPR2025 🏠 Project Page:
@peide_huang
Peide Huang
4 months
RT @frankzydou: TokenHSI: Unified Synthesis of Physical Human-Scene Interactions through Task Tokenization. 📦🏃🤸 Unified Control Policy & Ef…
@peide_huang
Peide Huang
6 months
🚀 New Research on Human-Robot Interaction! 🤖 How can humanoid robots communicate beyond words? Our framework, EMOTION, leverages Large Language Models (LLMs) to dynamically generate expressive gestures, enhancing non-verbal communication in robots. 🤯 Our experiments show…
@peide_huang
Peide Huang
6 months
🚨Check out our new RA-L paper about on-the-fly system ID for adaptive Sim2Real transfer 🦾 Also, first author @XilunZhang1999 is applying to PhD programs this year. If you have or know of any openings, please DM this brilliant young student!
@XilunZhang1999
XilunZhang
6 months
🤖 What if robots could adapt from simulation to reality on the fly, mastering tasks like scooping objects and playing table air hockey? 🥄🏓 I'm thrilled to share that our work, "Dynamics as Prompts: In-Context Learning for Sim-to-Real System Identification," has been accepted…
@peide_huang
Peide Huang
7 months
🚨Ever worried that your collected data cannot be used for training robot policies? You may need a Vision Pro. 🔥Check out this new AR-enabled, in-the-wild data collection method from our team here at Apple! Kudos to @ryan_hoque and everyone on the team! 🎊
@ryan_hoque
Ryan Hoque
7 months
🚨 New research from my team at Apple - real-time augmented reality robot feedback with just your hands + Vision Pro! Paper: Short thread below -
@peide_huang
Peide Huang
8 months
RT @talking_kim: Can we make wearable sensors for humanoid robots and augment their perception? We are introducing ARMOR, a novel egocentr…