Hanjung Kim Profile
Hanjung Kim

@KimD0ing

Followers
95
Following
260
Media
6
Statuses
33

Visiting Scholar @nyuniversity | Ph.D. student @ Yonsei University

New York, NY
Joined February 2023
@KimD0ing
Hanjung Kim
2 months
How can we effectively leverage human videos for robot learning while bridging the inherent embodiment gap? We introduce UniSkill, a scalable method for learning universal, cross-embodiment skill representations from large-scale in-the-wild video data. 1/n
2
30
174
@KimD0ing
Hanjung Kim
6 days
RT @sukjun_hwang: Tokenization has been the final barrier to truly end-to-end language models. We developed the H-Net: a hierarchical netw….
0
640
0
@KimD0ing
Hanjung Kim
6 days
RT @Jeongseok_hyun: 🎞️ 𝐃𝐨𝐮𝐛𝐥𝐞 𝐭𝐡𝐞 𝐒𝐩𝐞𝐞𝐝, 𝐙𝐞𝐫𝐨 𝐓𝐫𝐚𝐢𝐧𝐢𝐧𝐠: 𝐓𝐡𝐞 𝐅𝐫𝐞𝐞 𝐋𝐮𝐧𝐜𝐡 𝐟𝐨𝐫 𝐕𝐢𝐝𝐞𝐨 𝐋𝐋𝐌𝐬! ⚡️. 🚨I am excited to share that our paper is accepte….
0
6
0
@KimD0ing
Hanjung Kim
17 days
RT @Raunaqmb: Generalization needs data. But data collection is hard for precise tasks like plugging USBs, swiping cards, inserting plugs,….
0
28
0
@KimD0ing
Hanjung Kim
29 days
RT @Raunaqmb: Tactile sensing is gaining traction, but slowly. Why? Because integration remains difficult. But what if adding touch sensors….
0
106
0
@KimD0ing
Hanjung Kim
1 month
RT @LerrelPinto: It was nice engaging with the CV community on ways to stand out in the crowd. My answer was simple: work on robotics. Th….
0
25
0
@KimD0ing
Hanjung Kim
1 month
RT @notmahi: Live demo-ing RUMs at @CVPR this afternoon next to the expo sessions – stop by with something small and let’s see if the robot….
0
6
0
@KimD0ing
Hanjung Kim
1 month
RT @vincentjliu: We just open-sourced EgoZero!. It includes the full preprocessing to turn long-form recordings into individual demonstrati….
0
3
0
@KimD0ing
Hanjung Kim
1 month
RT @AdemiAdeniji: Everyday human data is robotics’ answer to internet-scale tokens. But how can robots learn to feel—just from videos?📹. I….
0
37
0
@KimD0ing
Hanjung Kim
1 month
RT @LerrelPinto: Teaching robots to learn only from RGB human videos is hard!. In Feel The Force (FTF), we teach robots to mimic the tactil….
0
86
0
@KimD0ing
Hanjung Kim
1 month
UniSkill was accepted at the @CVPR Agents in Interactions: from Humans to Robots workshop. I'll be attending CVPR and would love to connect and chat with folks in robotics. Feel free to ping me!
@KimD0ing
Hanjung Kim
2 months
How can we effectively leverage human videos for robot learning while bridging the inherent embodiment gap? We introduce UniSkill, a scalable method for learning universal, cross-embodiment skill representations from large-scale in-the-wild video data. 1/n
0
2
32
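To make the idea concrete, here is a minimal, hypothetical PyTorch-style sketch of what extracting an embodiment-agnostic skill embedding from a pair of video frames could look like. The module names, shapes, and architecture are illustrative assumptions for this thread, not the UniSkill implementation.

```python
# Hypothetical sketch (not the authors' code): inferring a "skill" embedding
# from how a scene changes between two video frames, regardless of whether a
# human or a robot caused the change. Shapes and modules are assumptions.
import torch
import torch.nn as nn

class SkillEncoder(nn.Module):
    def __init__(self, skill_dim: int = 128):
        super().__init__()
        # Tiny CNN backbone over stacked (current, future) RGB frames.
        self.backbone = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, skill_dim)

    def forward(self, frame_t, frame_t_plus_k):
        # Stack the two frames along the channel dimension: (B, 6, H, W).
        x = torch.cat([frame_t, frame_t_plus_k], dim=1)
        return self.head(self.backbone(x))  # (B, skill_dim)

# Usage example with random frames:
# z = SkillEncoder()(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128))
```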
@KimD0ing
Hanjung Kim
2 months
RT @vincentjliu: The future of robotics isn't in the lab – it's in your hands. Can we teach robots to act in the real world without a singl….
0
40
0
@KimD0ing
Hanjung Kim
2 months
RT @LerrelPinto: Imagine robots learning new skills—without any robot data. Today, we're excited to release EgoZero: our first steps in tr….
0
57
0
@KimD0ing
Hanjung Kim
2 months
RT @irmakkguzey: RUKA is warming up for our EXPO demo today @ICRA2025 with the help of our first-time teleoperators, @venkyp2000 and @LIUPE….
0
20
0
@KimD0ing
Hanjung Kim
2 months
RT @Tesla_Optimus: I’m not just dancing all day, ok
0
7K
0
@KimD0ing
Hanjung Kim
2 months
RT @notmahi: Morning, #ICRA2025 @ieee_ras_icra!. Bring something small 🍋🍑 and have our Robot Utility Model pick it up at our EXPO demo toda….
0
20
0
@KimD0ing
Hanjung Kim
2 months
RT @NaveenManwani17: 🚨Paper Alert 🚨. ➡️Paper Title: UniSkill: Imitating Human Videos via Cross-Embodiment Skill Representations. 🌟Few point….
0
1
0
@KimD0ing
Hanjung Kim
2 months
This work was done with amazing collaborators: @jae_hyun_kang, Hyolim Kang, @MeedEumCho, Seon Joo Kim, and @YoungwoonLee. More details are available here. Project Page: Code: Paper:
2
0
5
@KimD0ing
Hanjung Kim
2 months
Finally, our embodiment-agnostic skill representation enables generation of cross-embodiment future frames. 6/n
1
0
2
@KimD0ing
Hanjung Kim
2 months
It also successfully imitates unseen tasks from both human and robot prompts. 5/n
1
0
2
@KimD0ing
Hanjung Kim
2 months
Using the cross-embodiment skill representation, we train a skill-conditioned policy. This allows the policy to imitate prompt videos, even when they involve human demonstrations. 4/n
1
0
2
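For illustration only, a minimal sketch of a skill-conditioned policy of the kind described in 4/n: the policy consumes a robot observation feature together with a skill embedding extracted from a prompt video and outputs an action. All names and dimensions are assumptions, not the authors' code.

```python
# Hypothetical sketch (not the authors' code): a policy conditioned on a skill
# embedding so the same network can reproduce behaviors demonstrated by either
# human or robot prompt videos. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class SkillConditionedPolicy(nn.Module):
    def __init__(self, obs_dim: int = 512, skill_dim: int = 128, action_dim: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + skill_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, obs_feat, skill):
        # Concatenate the current observation feature with the prompt's skill
        # embedding and regress an action.
        return self.net(torch.cat([obs_feat, skill], dim=-1))

# Usage example with random features:
# a = SkillConditionedPolicy()(torch.rand(1, 512), torch.rand(1, 128))
```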