Brent Yi Profile
Brent Yi

@brenthyi

Followers
1K
Following
1K
Media
10
Statuses
116

PhD student @berkeley_ai 👋

Joined November 2009
@Winniechen02
Feng Chen
4 days
😮‍💨🤖💥 Tired of building dexterous tasks by hand, collecting data forever, and still fighting with the simulator environment? Meet GenDexHand -- a generative pipeline that creates dex-hand tasks, refines scenes, and learns to solve them automatically. No hand-crafted
1
24
71
@_sdbuchanan
Sam Buchanan
8 days
Manifold Muon stabilizes large model training, but it's expensive 💰 -- requiring an inner-loop solve for each update. 💡 But you can significantly accelerate it, leading to a 2.3x speedup on @thinkymachines's experiment with no performance loss! Blog and 🧵 below…
5
25
268
@ChungMinKim
Chung Min Kim
19 days
@brenthyi and I are at #IROS2025 to present PyRoki 🫶 Find us either: - tomorrow (Tues 10:30am) in room 102C, or - anytime during the conference!
@ChungMinKim
Chung Min Kim
6 months
Excited to introduce PyRoki ("Python Robot Kinematics"): easier IK, trajectory optimization, motion retargeting... with an open-source toolkit on both CPU and GPU
1
3
67
@ruoshi_liu
Ruoshi Liu
24 days
Everyone says they want general-purpose robots. We actually mean it -- and we'll make it weird, creative, and fun along the way 😎 Recruiting PhD students to work on Computer Vision and Robotics @umdcs for Fall 2026 in the beautiful city of Washington DC!
28
73
469
@kevin_zakka
Kevin Zakka
29 days
We open-sourced the full pipeline! Data conversion from MimicKit, training recipe, pretrained checkpoint, and deployment instructions. Train your own spin kick with mjlab: https://t.co/KvNQn0Edzr
github.com
Train a Unitree G1 humanoid to perform a double spin kick using mjlab - mujocolab/g1_spinkick_example
7
78
389
@kevin_zakka
Kevin Zakka
30 days
It was a joy bringing Jason's signature spin-kick to life on the @UnitreeRobotics G1. We trained it in mjlab with the BeyondMimic recipe but had issues on hardware last night (the IMU gyro was saturating). One more sim-tuning pass and we nailed it today. With @qiayuanliao and
@xbpeng4
Jason Peng
1 month
Implementing motion imitation methods involves lots of nuisances. Not many codebases get all the details right. So, we're excited to release MimicKit! https://t.co/7enUVUkc3h A framework with high quality implementations of our methods: DeepMimic, AMP, ASE, ADD, and more to come!
25
93
653
@_sdbuchanan
Sam Buchanan
1 month
We wrote a book about representation learning! It's fully open source, available and readable online, and covers everything from theoretical foundations to practical algorithms. 👷‍♂️ We're hard at work updating the content for v2.0, and would love your feedback and contributions
13
204
1K
@younggyoseo
Younggyo Seo
1 month
I've recently started working on humanoids and this project set the bar super high for me 😂 Very impressive demo, and our team is working on many interesting projects, exciting time!
@zhenkirito123
Zhen Wu
1 month
Humanoid motion tracking performance is greatly determined by retargeting quality! Introducing OmniRetarget 🎯, generating high-quality interaction-preserving data from human motions for learning complex humanoid skills with minimal RL: - 5 rewards, - 4 DR
1
4
43
@brenthyi
Brent Yi
1 month
New project from @kevin_zakka I've been using + helping with! Ridiculously easy setup, typed codebase, headless vis => happy 😊
@kevin_zakka
Kevin Zakka
1 month
I'm super excited to announce mjlab today! mjlab = Isaac Lab's APIs + best-in-class MuJoCo physics + massively parallel GPU acceleration Built directly on MuJoCo Warp with the abstractions you love.
1
3
37
@akanazawa
Angjoo Kanazawa
1 month
Congratulations to the videomimic team for winning the best student paper award at CoRL 2025 🥹🎉 Grateful to the CoRL community for the recognition!
@arthurallshire
Arthur Allshire
6 months
our new system trains humanoid robots using data from cell phone videos, enabling skills such as climbing stairs and sitting on chairs in a single policy (w/ @redstone_hong @junyi42 @davidrmcall)
6
23
250
@brenthyi
Brent Yi
1 month
😮
@zhenkirito123
Zhen Wu
1 month
Humanoid motion tracking performance is greatly determined by retargeting quality! Introducing OmniRetarget 🎯, generating high-quality interaction-preserving data from human motions for learning complex humanoid skills with minimal RL: - 5 rewards, - 4 DR
0
0
6
@larsankile
Lars Ankile
1 month
How can we enable finetuning of humanoid manipulation policies, directly in the real world? In our new paper, Residual Off-Policy RL for Finetuning BC Policies, we demonstrate real-world RL on a bimanual humanoid with 5-fingered hands (29 DoF) and improve pre-trained policies
8
50
230
@philippswu
Philipp Wu
2 months
It's been over 2 years since we first released GELLO. We've updated the project with 3 new GELLOs to control the YAM, ARX, and similar arms! Enables easy data collection for policy training (i.e., finetuned Pi0!). All open-source contributions from @jeffreyliu05 and @hmahesh007!
5
24
148
@marionlepert
Marion Lepert
3 months
Introducing Masquerade 🎭: We edit in-the-wild videos to look like robot demos, and find that co-training policies with this data achieves much stronger performance in new environments. ❗Note: No real robots in these videos❗ It's all 💪🏼 ➡️ 🦾 🧵 1/6
18
38
280
@QianqianWang5
Qianqian Wang
3 months
📢 Thrilled to share that I'll be joining Harvard and the Kempner Institute as an Assistant Professor starting Fall 2026! I'll be recruiting students this year for the Fall 2026 admissions cycle. Hope you apply!
@KempnerInst
Kempner Institute at Harvard University
3 months
We are thrilled to share the appointment of @QianqianWang5 as an #KempnerInstitute Investigator! She will bring her expertise in computer vision to @Harvard. Read the announcement: https://t.co/Aoh6A5gp9B @hseas #AI #ComputerVision
101
41
724
@qiayuanliao
Qiayuan Liao
3 months
Want to achieve extreme performance in motion tracking -- and go beyond it? Our preprint tech report is now online, with open-source code available!
36
241
1K
@ZeYanjie
Yanjie Ze
3 months
Excited to open-source GMR: General Motion Retargeting. Real-time human-to-humanoid retargeting on your laptop. Supports diverse motion formats & robots. Unlock whole-body humanoid teleoperation (e.g., TWIST). Video with 🔊
22
113
697
@guanqi_he
Guanqi He
3 months
SPI-Active was accepted to CoRL 2025, see you in Seoul ^_^ https://t.co/NmgvDg08rv
@guanqi_he
Guanqi He
6 months
How can we align simulation with real-world dynamics for legged robots? Check out our new work: SPI-Active -- sampling-based system identification with active exploration for sim-to-real transfer in legged systems. We leverage sampling-based optimization to estimate robot
0
4
43
@Almorgand
Alexandre Morgand
3 months
"Cameras as Relative Positional Encoding" TLDR: comparison for conditioning transformers on cameras: token-level raymap, attention-level relative pose encodings, a (new) relative encoding Projective Positional Encoding -> camera frustums, (int|ext)insics for relative pos encoding
2
51
465
@redstone_hong
Hongsuk Benjamin Choi
3 months
🤖 Initial code release is up for VideoMimic Real2Sim! https://t.co/jKGM3lyRCq VideoMimic is a real-to-sim-to-real pipeline for deploying humanoids in the real world. It supports: - Human motion capture from video - Environment reconstruction for simulation from video -
github.com
Visual Imitation Enables Contextual Humanoid Control. arXiv, 2025. - hongsukchoi/VideoMimic
@arthurallshire
Arthur Allshire
6 months
our new system trains humanoid robots using data from cell phone videos, enabling skills such as climbing stairs and sitting on chairs in a single policy (w/ @redstone_hong @junyi42 @davidrmcall)
3
35
166