Kevin Zakka Profile
Kevin Zakka

@kevin_zakka

Followers: 17K · Following: 9K · Media: 335 · Statuses: 3K

PhD-ing @Berkeley_AI

Berkeley, CA
Joined January 2016
@kevin_zakka
Kevin Zakka
2 months
It was a joy bringing Jason’s signature spin-kick to life on the @UnitreeRobotics G1. We trained it in mjlab with the BeyondMimic recipe but had issues on hardware last night (the IMU gyro was saturating). One more sim-tuning pass and we nailed it today. With @qiayuanliao and
@xbpeng4
Jason Peng
3 months
Implementing motion imitation methods involves lots of nuances, and not many codebases get all the details right. So we're excited to release MimicKit! https://t.co/7enUVUkc3h A framework with high-quality implementations of our methods: DeepMimic, AMP, ASE, ADD, and more to come!
26
93
660
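For context, a minimal sketch of the kind of sim-tuning fix a saturating gyro suggests: clip the simulated angular-velocity reading to the sensor's measurement range so the policy never relies on rates the real IMU cannot report. The 35 rad/s limit and the function name are assumptions for illustration, not the G1's actual spec or the mjlab implementation.

```python
import numpy as np

def saturated_gyro(ang_vel: np.ndarray, max_rate: float = 35.0) -> np.ndarray:
    """Emulate gyro saturation by clipping each axis to +/- max_rate (rad/s)."""
    # max_rate is a placeholder; a real sim-tuning pass would use the IMU's datasheet limit.
    return np.clip(ang_vel, -max_rate, max_rate)
```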
@ZhiSu22
Zhi Su
13 hours
HITTER in mjlab @kevin_zakka
2
4
34
@Sentdex
Harrison Kinsley
3 days
New video is out, teaching a Unitree G1 humanoid to walk using reinforcement learning (PPO). It's the first time I've gotten sim2real to work with robotics; sharing what I've learned and testing how good the policy actually is by walking around outside on some semi-challenging terrain.
13
6
160
@kevin_zakka
Kevin Zakka
4 days
mink 1.0.0 is out! You can now enforce tasks as strict QP equalities. One neat application is "locking" certain DoFs. In this example, joints 1 & 2 of the Panda are locked when the cube is red and then unlocked when it turns green.
2
20
186
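For context, a minimal numpy sketch of what "locking" DoFs with strict QP equalities means at the math level: minimize ||J dq - v||² subject to dq_i = 0 for the locked joints, solved here via the KKT system. This illustrates the underlying formulation only; mink 1.0's actual API for equality tasks may look different.

```python
import numpy as np

def solve_dq(J, v, locked, damping=1e-6):
    """Differential IK step with hard equality constraints dq[i] = 0 for i in `locked`."""
    n = J.shape[1]
    H = J.T @ J + damping * np.eye(n)      # QP Hessian (damped for well-posedness)
    g = J.T @ v                            # QP linear term
    A = np.eye(n)[list(locked)]            # equality rows: select the locked joints
    k = len(locked)
    # KKT system: [H A^T; A 0] [dq; lambda] = [g; 0]
    kkt = np.block([[H, A.T], [A, np.zeros((k, k))]])
    rhs = np.concatenate([g, np.zeros(k)])
    return np.linalg.solve(kkt, rhs)[:n]

# Example: 7-DoF arm, lock joints 1 & 2 (as in the Panda demo).
J = np.random.randn(6, 7)
v = np.random.randn(6)
dq = solve_dq(J, v, locked=[1, 2])
print(dq[[1, 2]])  # ~0 up to numerical precision
```

The same pattern extends to any linear equality, e.g. coupling two joints instead of freezing them.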
@kevin_zakka
Kevin Zakka
6 days
We use ty in mjlab ⚡️
@charliermarsh
Charlie Marsh
6 days
Announcing the Beta release of ty: an extremely fast type checker and language server for Python, written in Rust. We now use ty exclusively in our own projects and are ready to recommend it to motivated users. 10x, 50x, even 100x faster than existing type checkers and LSPs.
1
1
29
@Boyiliee
Boyi Li
6 days
Introducing FoundationMotion. A large-scale, video-derived motion annotation dataset & auto-labeling pipeline + advanced models for motion understanding. Fully open-source: code, datasets, and models, free to use and build on. Understanding motion is core to physical reasoning,
4
71
425
@qiyang_li
Qiyang (Colin) Li
10 days
Action chunking is drawing growing interest in RL, yet its theoretical properties are still understudied. We are excited to share some insights on when we should use action chunking in Q-learning + a new algo (DQC) to tackle hard long-horizon tasks! https://t.co/izVWQBgH3c 🧵 1/N
6
54
298
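A hedged sketch of the basic idea of Q-learning over action chunks: the critic scores a whole h-step chunk of actions and bootstraps with an h-step discounted target. The names (ChunkCritic, chunk_td_target) and sizes are illustrative and are not the DQC algorithm from the thread.

```python
import torch
import torch.nn as nn

class ChunkCritic(nn.Module):
    """Q(s, a_{t:t+h}): scores a whole chunk of h actions at once."""
    def __init__(self, obs_dim: int, act_dim: int, horizon: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim * horizon, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, action_chunk):
        # action_chunk: (batch, horizon, act_dim) -> flatten before concatenating with obs.
        flat = action_chunk.flatten(start_dim=1)
        return self.net(torch.cat([obs, flat], dim=-1)).squeeze(-1)

def chunk_td_target(rewards, next_value, gamma: float):
    """h-step target for one chunk: sum_k gamma^k r_k + gamma^h V(s_{t+h})."""
    h = rewards.shape[1]
    discounts = gamma ** torch.arange(h, dtype=rewards.dtype, device=rewards.device)
    return (rewards * discounts).sum(dim=1) + (gamma ** h) * next_value
```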
@zacamaso
zaca
12 days
Got the manipulation task in #mjlab working with the #SO101 arm. Only required a few modifications to @kevin_zakka's very elegant code. Trains in ~1-2 hours on a 3090 at ~70-100k steps/second. Looking forward to the pixel-in update and sim2real transfer. Code:
@kevin_zakka
Kevin Zakka
21 days
Happy Monday! We've added a cube lifting example using the YAM arm as a pedagogical reference for setting up manipulation tasks in mjlab. The reward is extremely simple and produces an elegant policy.
6
8
148
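A hedged guess at what an "extremely simple" lift reward can look like: a reach term that pulls the end-effector toward the cube plus a term for raising the cube toward a target height. The names (ee_pos, cube_pos, target_height) and weights are assumptions, not mjlab's actual reward.

```python
import numpy as np

def lift_reward(ee_pos, cube_pos, target_height=0.20, reach_scale=0.1):
    """Toy lift reward: get close to the cube, then raise it."""
    reach = np.exp(-np.linalg.norm(ee_pos - cube_pos) / reach_scale)   # 1.0 when touching the cube
    lift = np.clip(cube_pos[2] / target_height, 0.0, 1.0)              # fraction of the target height reached
    return reach + 2.0 * lift
```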
@brenthyi
Brent Yi
12 days
tyro 1.0 is out 🐣 This has been a pet project/niche interest of mine for ~4 years now, so it's a bit of a sentimental moment... https://t.co/bAibP3RjxE
github.com · CLI interfaces & config objects, from types.
11
21
172
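A minimal example of the "CLI interfaces & config objects, from types" idea: tyro.cli builds a command-line parser from a dataclass's fields and type hints and returns a populated instance. The TrainConfig fields here are made up for illustration.

```python
from dataclasses import dataclass

import tyro

@dataclass
class TrainConfig:
    env: str = "G1-tracking"      # illustrative fields, not a real mjlab config
    num_envs: int = 4096
    learning_rate: float = 3e-4

if __name__ == "__main__":
    # Flags, help text, and defaults are derived from the field names and type hints.
    cfg = tyro.cli(TrainConfig)
    print(cfg)
```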
@TetherIA_ai
TetherIA.ai
1 month
We built a high-fidelity, cable-driven robotic hand in MuJoCo — accurately reproducing the full internal tendon system 🤖🧩 It’s now officially part of the MuJoCo Menagerie. On top of it, we developed a zero-shot RL training & real-world deployment pipeline based on MuJoCo
8
27
160
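A small sketch of poking at a tendon-driven Menagerie model with MuJoCo's Python bindings: load the model, run a forward pass, and read out tendon lengths. The XML path is a placeholder; the calls (MjModel.from_xml_path, mj_forward, mj_id2name, ten_length) are standard MuJoCo API.

```python
import mujoco

# Placeholder path: point this at the hand's scene.xml in your Menagerie checkout.
model = mujoco.MjModel.from_xml_path("mujoco_menagerie/some_hand/scene.xml")
data = mujoco.MjData(model)
mujoco.mj_forward(model, data)  # compute kinematics so tendon lengths are populated

print("tendons:", model.ntendon)
for i in range(model.ntendon):
    name = mujoco.mj_id2name(model, mujoco.mjtObj.mjOBJ_TENDON, i)
    print(name, "length:", float(data.ten_length[i]))
```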
@QuantummCookie
Yunhao Cao
14 days
Introducing Correspondence-Oriented Imitation Learning (COIL), a self-supervised framework for robotic manipulation using 3D keypoint trajectories as a flexible, unified task representation!
5
9
54
@kevin_zakka
Kevin Zakka
14 days
We've set up nightly benchmarking for mjlab to track env/physics throughput and policy health over time. Every night we train + evaluate a tracking policy on the G1, measuring task performance and sim throughput across multiple tasks. https://t.co/gfLK3HBOUU
2
2
91
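A hedged sketch of the throughput half of such a nightly job: step a vectorized environment for a fixed wall-clock window and report environment steps per second. make_env and the gym-style reset/step API here are placeholders, not mjlab's benchmark harness.

```python
import time

def measure_throughput(env, num_envs: int, seconds: float = 30.0) -> float:
    """Run random actions for `seconds` of wall-clock time and return env steps/second."""
    obs = env.reset()
    steps = 0
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        obs, reward, done, info = env.step(env.action_space.sample())
        steps += num_envs  # each vectorized step advances every environment once
    elapsed = time.perf_counter() - start
    return steps / elapsed

# Example (make_env is a placeholder factory):
# throughput = measure_throughput(make_env(num_envs=4096), num_envs=4096)
```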
@kevin_zakka
Kevin Zakka
17 days
Shoutout to @mustafa_hdb who has been doing all the tiled rendering magic in mujoco_warp!
0
0
4
@kevin_zakka
Kevin Zakka
17 days
Edit: the policy above is trained with depth. Here's the RGB policy!
1
1
28
@kevin_zakka
Kevin Zakka
17 days
Coming soon to mjlab and a long time in the making: RGB-D camera rendering! We can solve cube lifting with the YAM arm from 32×32 RGB frames in under 5 minutes of wall-clock time. Here's a clip showing emergent "search" behavior along with our upcoming viser visualization.
12
38
360
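A hedged sketch of the kind of tiny CNN encoder a 32×32 RGB policy might use, mapping frames to a latent vector that feeds the policy; layer sizes here are illustrative, not mjlab's architecture.

```python
import torch
import torch.nn as nn

class PixelEncoder(nn.Module):
    """Encode a 32x32 RGB frame into a compact latent for the policy."""
    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),  # 8x8 -> 4x4
        )
        self.fc = nn.Linear(64 * 4 * 4, out_dim)

    def forward(self, rgb):
        # rgb: (batch, 3, 32, 32), values in [0, 1]
        return self.fc(self.conv(rgb).flatten(start_dim=1))

# latent = PixelEncoder()(torch.rand(8, 3, 32, 32))  # -> (8, 128)
```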
@Sentdex
Harrison Kinsley
19 days
@kevin_zakka thanks, it's been a pleasure working with mjlab!
0
1
4
@ChaoyiPan
Chaoyi Pan
19 days
Generative models (diffusion/flow) are taking over robotics 🤖. But do we really need to model the full action distribution to control a robot? We suspected the success of Generative Control Policies (GCPs) might be "Much Ado About Noising." We rigorously tested the myths. 🧵👇
15
87
518
@Sentdex
Harrison Kinsley
19 days
Geoff the G1 preparing to go offroading IRL. I did a terrible job on the reward function here and was actually just tuning in to see what was broken, but instead found a pretty good model. The robots just want to learn.
9
11
176
@edward_s_hu
Edward Hu
21 days
Happy to announce our NeurIPS '25 paper, real-world RL of active perception behaviors! I am pretty excited about this project: I learned that real-world robot RL is actually quite straightforward. Details below:
2
25
202
@awinkler_
Alexander Winkler
21 days
For any PhD students looking for a summer internship to work with me on RL, VLAs and humanoid robots in SF, please consider applying. https://t.co/adODiI9gja
8
26
278