
Peide Huang
@peide_huang
Followers: 486 · Following: 923 · Media: 14 · Statuses: 63
Research scientist at Apple. Ph.D. @CarnegieMellon, M.S. @Stanford, B.E. @NTUsg. All opinions are my own.
Joined January 2021
🚨Introducing EgoDex, the largest egocentric video dataset to date focused on human dexterous manipulation, with structured annotations including 3D upper-body and hand tracking🤲, camera pose📷, and language annotations💬. Kudos to the team, and looking forward to what the…
Imitation learning has a data scarcity problem. Introducing EgoDex from Apple, the largest and most diverse dataset of dexterous human manipulation to date: 829 hours of egocentric video + paired 3D hand poses across 194 tasks. Now on arXiv: (1/4)
Great post from SL as always! One question on the surrogate vs. real data boundary: if human video or hand-gripper data is "surrogate," what about real robot data from robots with different morphologies/controllers than the target? Is that still "real," or does it become…
I wrote a fun little article about all the ways to dodge the need for real-world robot data. I think it has a cute title.
From my understanding, ROSETTA is basically RL from human+AI feedback. Aligning robots with human preferences will be a crucial piece of everyday robotics. Great work!
I've always been thinking about how to make robots naturally co-exist with humans. The first step is having robots understand our unconstrained, dynamic preferences and follow them. 🤖 We proposed ROSETTA, which translates free-form language instructions into reward functions to…
RT @SnehalJauhri: Thank you to all the speakers & attendees for making the EgoAct workshop a great success! Congratulations to the winners…
Very happy that EgoDex received the Best Paper Award at the 1st EgoAct workshop at #RSS2025! Huge thanks to the organizing committee @SnehalJauhri @GeorgiaChal @GalassoFab10 @danfei_xu @YuXiang_IRVL for putting together this forward-looking workshop. Also kudos to my colleagues @ryan_hoque…
In RoboTool, we showed that LLMs provide robots with important prior knowledge about how to use tools. This time, @JunyaoShi and his colleagues show that VLMs provide guidance not only on how to use tools, but also on how to design them. Check out this…
arxiv.org: Tool use is a hallmark of advanced intelligence, exemplified in both animal behavior and robotic capabilities. This paper investigates the feasibility of imbuing robots with the ability to…
💡Can robots autonomously design their own tools and figure out how to use them? We present VLMgineer 🛠️, a framework that leverages Vision Language Models with Evolutionary Search to automatically generate and refine physical tool designs alongside corresponding robot action…
Check out this extremely cool EgoDex data visualization tool created by @pablovelagomez1! Thanks for the great work!
Continuing work on the EgoDex dataset, I ported the entire test set to @rerundotio and created a @Gradio app to view it! Links below. This allows for a straightforward way to explore each episode of the (test) dataset and better understand how the hand tracking and SLAM…
Human egocentric video is the true passively-scalable data source! Great work from NYU and Berkeley!
Imagine robots learning new skills, without any robot data. Today, we're excited to release EgoZero: our first steps in training robot policies that operate in unseen environments, solely from data collected through humans wearing Aria smart glasses. 🧵👇
Love to see the convergence of animation and whole-body control!
TokenHSI: Unified Synthesis of Physical Human-Scene Interactions through Task Tokenization. 🦾🏃🤸 Unified Control Policy & Efficient Policy Adaptation for Various Human-Scene Interaction (HSI) Tasks! #CVPR2025. Project Page:
RT @frankzydou: TokenHSI: Unified Synthesis of Physical Human-Scene Interactions through Task Tokenization. 🦾🏃🤸 Unified Control Policy & Ef…
🚨Check out our new RA-L paper about on-the-fly system ID for adaptive Sim2Real transfer🦾 Also, first author @XilunZhang1999 is applying for PhD programs this year. If you have or know of any openings, please DM this brilliant young student!
🤖 What if robots could adapt from simulation to reality on the fly, mastering tasks like scooping objects and playing table air hockey? I'm thrilled to share that our work, "Dynamics as Prompts: In-Context Learning for Sim-to-Real System Identification," has been accepted…
🚨Ever worried that your collected data cannot be used for training robot policies? You may need a Vision Pro. 🔥Check out this new AR-enabled, in-the-wild data collection method from our team here at Apple! Kudos to @ryan_hoque and everyone on the team!👇
🚨 New research from my team at Apple - real-time augmented reality robot feedback with just your hands + Vision Pro! Paper: Short thread below.
RT @talking_kim: Can we make wearable sensors for humanoid robots and augment their perception? We are introducing ARMOR, a novel egocentr…