Zaza

@AIkingdome

Followers: 3 · Following: 375 · Media: 2 · Statuses: 32

AI engineer (ex-Google/Disney/Fortnite). Computer Vision, AI Health & Safety, 3D printing. I love robots.

San Francisco, CA
Joined December 2024
@adcock_brett
Brett Adcock
18 hours
Testing the robot handing out swag before Christmas break🎄
67
109
947
@AyatoKanada
電通大金田研 | kanada lab
7 days
We've uploaded to YouTube the crawler robot that Uda presented at IROS. By bending its body at any point, it can deftly traverse narrow passages and climb over steps. https://t.co/mtfERj2CoA In fact, we've already developed a substantially powered-up version as well, and expect to be able to release it soon.
0
7
15
@H0meMadeGarbage
HomeMadeGarbage
28 days
Chewing over Robotic Olaf's shoulder design. I see... a combination of a ball bearing and a ball joint looks like it would be fun to play with.
4
38
325
@KyberLabsRobots
Kyber Labs
5 days
Demo showing our system autonomously assembling a part! What else should we have it do? And if you want a hand, the waitlist is open here! https://t.co/bh9Xv05eje
77
188
1K
@ChongZitaZhang
C Zhang
5 days
Disney Olaf robot. Classic mimic RL with nice modelling. Surprisingly small legs and a big neck.
19
135
967
@svlevine
Sergey Levine
8 days
It turns out that VLAs learn to align human and robot behavior as we scale up pre-training with more robot data. In our new study at Physical Intelligence, we explored this "emergent" human-robot alignment and found that we could add human videos without any transfer learning!
18
69
748
@pascalefung
Pascale Fung
9 days
Introducing VL-JEPA: Vision-Language Joint Embedding Predictive Architecture for streaming, live action recognition, retrieval, VQA, and classification tasks, with better performance and higher efficiency than large VLMs. • VL-JEPA is the first non-generative model that can …
12
82
525
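For context on the JEPA objective the tweet refers to: instead of generating pixels or text, the model regresses predicted embeddings of unseen content against a frozen target encoder. A minimal Python sketch of that idea; the encoder shapes, names, and the EMA-free simplification are my assumptions, not the VL-JEPA implementation:

import torch
import torch.nn as nn

D_IN, D = 512, 256  # toy feature and embedding sizes (assumptions)

context_encoder = nn.Sequential(nn.Linear(D_IN, D), nn.ReLU(), nn.Linear(D, D))
target_encoder = nn.Sequential(nn.Linear(D_IN, D), nn.ReLU(), nn.Linear(D, D))
predictor = nn.Sequential(nn.Linear(D, D), nn.ReLU(), nn.Linear(D, D))
for p in target_encoder.parameters():   # target branch gets no gradients
    p.requires_grad_(False)

opt = torch.optim.Adam(list(context_encoder.parameters()) +
                       list(predictor.parameters()), lr=1e-3)

def jepa_step(context_feats, target_feats):
    """Non-generative objective: regress predicted target *embeddings*,
    never reconstructing raw pixels or tokens."""
    z_ctx = predictor(context_encoder(context_feats))   # (B, D)
    with torch.no_grad():
        z_tgt = target_encoder(target_feats)            # (B, D)
    loss = nn.functional.mse_loss(z_ctx, z_tgt)
    opt.zero_grad(); loss.backward(); opt.step()
    # In practice the target encoder is usually an EMA of the context encoder.
    return loss.item()

loss = jepa_step(torch.randn(8, D_IN), torch.randn(8, D_IN))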
@MikeShou1
Mike Shou
13 days
(1/6) X-Humanoid 🤖: Scaling up data for Humanoid Robots. We convert human daily activity videos (from Ego-Exo4D) into humanoid videos (i.e., Tesla Optimus) performing tasks like cooking or fixing a bike. This data can potentially be used to train robot policies and world models …
23
74
414
@Majumdar_Ani
Anirudha Majumdar
12 days
Generalist robots need a generalist evaluator. But how do you test safety without breaking things? 💥 🌎 Introducing our new work from @GoogleDeepMind: Evaluating Gemini Robotics Policies in a Veo World Simulator https://t.co/ZjvpYXFddZ 🧵👇
26
94
562
@lukas_m_ziegler
Lukas Ziegler
15 days
Safety in mobile robotics! 👷🏼‍♂️ Robots navigating through crowds is one of the hardest problems in mobile robotics: uncertainty, human motion, and real-time constraints don't mix well. A team at @tudelft has introduced DRA-MPPI, a new motion-planning method that lets robots move …
19
159
1K
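DRA-MPPI builds on Model Predictive Path Integral (MPPI) control, which plans by sampling noisy control sequences, rolling them out through a dynamics model, and averaging them with weights exp(-cost/λ). A bare-bones sketch of plain MPPI for a 2D point robot; the dynamics, cost, and every parameter here are placeholders, not the DRA-MPPI formulation:

import numpy as np

def mppi_step(state, horizon=20, samples=256, lam=1.0, noise_std=0.5,
              goal=np.array([5.0, 5.0]), dt=0.1, seed=0):
    """One MPPI iteration for a 2D point robot (placeholder dynamics).
    Sample noisy control sequences, roll them out, and average them
    weighted by exp(-cost / lambda)."""
    rng = np.random.default_rng(seed)
    nominal = np.zeros((horizon, 2))               # nominal velocity commands
    noise = rng.normal(0.0, noise_std, (samples, horizon, 2))
    controls = nominal[None] + noise               # (K, T, 2)

    costs = np.zeros(samples)
    pos = np.repeat(state[None], samples, axis=0)  # (K, 2)
    for t in range(horizon):
        pos = pos + controls[:, t] * dt            # integrate point dynamics
        costs += np.sum((pos - goal) ** 2, axis=1) # distance-to-goal cost

    weights = np.exp(-(costs - costs.min()) / lam)
    weights /= weights.sum()
    best = np.einsum("k,ktu->tu", weights, controls)  # weighted average plan
    return best[0]                                 # execute first action only

action = mppi_step(np.zeros(2))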
@IlirAliu_
Ilir Aliu - eu/acc
16 days
1.7× faster inverse kinematics in pure Python, GPU-accelerated! Fully open-source and built for scale. [📍Bookmark for later] PyRoki is a modular toolkit for robot kinematic optimization, supporting inverse kinematics, trajectory optimization, and motion retargeting. Built …
3
70
451
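As background on what "inverse kinematics as optimization" means here: given a target end-effector pose, iteratively solve for joint angles that achieve it. A toy damped-least-squares IK for a 2-link planar arm; this is a generic illustration, not PyRoki's API or its GPU-accelerated solver:

import numpy as np

L1, L2 = 1.0, 0.8  # link lengths (illustrative)

def fk(q):
    """Forward kinematics: joint angles -> end-effector position."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik(target, q=np.array([0.3, 0.3]), iters=100, damping=1e-2):
    """Damped least-squares IK: repeatedly solve J dq = error."""
    for _ in range(iters):
        err = target - fk(q)
        J = jacobian(q)
        dq = np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ err)
        q = q + dq
    return q

q = ik(np.array([1.2, 0.6]))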
@YunzhuLiYZ
Yunzhu Li
19 days
Robotic oil painting powered by a learning-based pixel dynamics model (world-model style). 🤖🎨 The key part? Once you have a reasonably good model (even an approximate one), you unlock a ton of possibilities through inverse optimization, from motion planning and corrective …
@RuohanZhang76
Ruohan Zhang
19 days
1/N 🎨🤖 Given only a static image of an oil painting by an expert artist, can a robot infer the corresponding control actions, such as trajectory, orientation, and applied force, to accurately reproduce the painting? 🖌️ Introducing IMPASTO: a robotic oil-painting system that …
2
16
105
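The "inverse optimization" both threads describe is simple to state: with a differentiable learned dynamics model f(canvas, action) → next canvas, optimize the stroke sequence by gradient descent on a pixel loss against the target painting. A schematic sketch with a stand-in MLP dynamics model; none of the shapes or names come from IMPASTO:

import torch
import torch.nn as nn

CANVAS, ACT = 64, 4                      # toy flattened-canvas and action sizes
model = nn.Sequential(nn.Linear(CANVAS + ACT, 128), nn.ReLU(),
                      nn.Linear(128, CANVAS))
for p in model.parameters():             # the learned dynamics stays frozen
    p.requires_grad_(False)

def plan_strokes(canvas, target, steps=5, iters=200, lr=0.05):
    """Inverse optimization: find stroke actions whose model rollout
    best matches the target image, via gradients through the model."""
    actions = torch.zeros(steps, ACT, requires_grad=True)
    opt = torch.optim.Adam([actions], lr=lr)
    for _ in range(iters):
        c = canvas
        for a in actions:                # differentiable forward rollout
            c = model(torch.cat([c, a]))
        loss = nn.functional.mse_loss(c, target)
        opt.zero_grad(); loss.backward(); opt.step()
    return actions.detach()

strokes = plan_strokes(torch.zeros(CANVAS), torch.rand(CANVAS))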
@Stone_Tao
Stone Tao
21 days
GPU-parallelized envs have accelerated RL, but most implementations exhibit critical instability when running on-policy RL with short rollouts. We present Staggered Environment Resets. A few lines of code are all you need! Presenting today, 4:30 PM, poster 310 #NeurIPS2025 🧵(1/8)
7
8
151
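The instability comes from synchronized episode boundaries: if all N parallel envs reset at t=0, every short rollout sees the same episode phase. One way to desynchronize them, sketched here with the standard gymnasium API rather than the paper's GPU-parallelized setup, is to burn a random number of steps in each env before training starts:

import gymnasium as gym
import numpy as np

def make_staggered_envs(env_id, n, max_episode_len, seed=0):
    """Create n envs and advance each by a random number of random
    steps, so episode time indices are spread uniformly at rollout
    start instead of arriving in lockstep."""
    rng = np.random.default_rng(seed)
    envs = [gym.make(env_id) for _ in range(n)]
    for i, env in enumerate(envs):
        env.reset(seed=seed + i)
        for _ in range(int(rng.integers(0, max_episode_len))):
            _, _, term, trunc, _ = env.step(env.action_space.sample())
            if term or trunc:       # episode ended during warmup: restart it
                env.reset()
    return envs

envs = make_staggered_envs("CartPole-v1", n=8, max_episode_len=500)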
@chris_j_paxton
Chris Paxton
20 days
Always important to remember that a lot of these robots are "faking" the humanlike motions -- it's a property of how they're trained, not an inherent property of the hardware. They're actually capable of way weirder stuff and way faster motions.
@chris_j_paxton
Chris Paxton
20 days
And today we have things like this: Figure 03 running. This is a whole-body-control neural net, presumably the same basic recipe as the Tesla and Unitree videos we have seen. Amazing work from the Figure team, but running is now basically commoditized.
1K
3K
59K
@LeRobotHF
LeRobot
20 days
🚀 Introducing X-VLA: LeRobot's new soft-prompted Vision-Language-Action model. X-VLA is built to scale across many embodiments: different robots, cameras, action spaces, and environments, all handled by one unified transformer backbone. - Generalist across robots (Franka, …
9
79
410
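"Soft-prompted" here means each embodiment conditions a shared transformer through a small set of learnable prefix tokens, rather than through a separate head per robot. A schematic of that conditioning; the dimensions, layer counts, and class name are my assumptions, not X-VLA's architecture:

import torch
import torch.nn as nn

class SoftPromptedBackbone(nn.Module):
    """Shared transformer with per-embodiment learnable prompt tokens."""
    def __init__(self, num_embodiments=4, prompt_len=8, d_model=256):
        super().__init__()
        self.prompts = nn.Parameter(
            torch.randn(num_embodiments, prompt_len, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, obs_tokens, embodiment_id):
        # obs_tokens: (B, T, d_model); prepend this robot's soft prompt
        # so one backbone serves every embodiment.
        B = obs_tokens.shape[0]
        prompt = self.prompts[embodiment_id].expand(B, -1, -1)
        return self.backbone(torch.cat([prompt, obs_tokens], dim=1))

net = SoftPromptedBackbone()
out = net(torch.randn(2, 16, 256), embodiment_id=1)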
@liuziwei7
Ziwei Liu
22 days
📢 #NeurIPS2025 Welcome to check out our work @NeurIPSConf 📢
* 3D Gen
- PhysX: https://t.co/JQQbV8PWgI
* VLM
- GUI-Reflection: https://t.co/mRpl6CaOAc
- ShotBench: https://t.co/ftsZqQxqI4
* World Model
- Spatial Mem: https://t.co/aiyloarmsH
- Imagine360: https://t.co/VKrwbc0mXc
2
37
179
@YifanHou2
Yifan Hou
22 days
Can we quickly improve a pre-trained robot policy by learning from real-world human corrections? Introducing Compliant Residual DAgger (CR-DAgger), a system that improves policy performance to close to 100% on challenging contact-rich manipulation problems, using as few as …
4
52
217
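The residual idea underneath CR-DAgger: freeze the pre-trained base policy and fit only a small corrective delta to the actions humans steered the robot toward, then execute base + residual at test time. A schematic sketch with generic MLPs; the paper's compliant-control and force-feedback specifics are omitted:

import torch
import torch.nn as nn

OBS, ACT = 32, 7
base_policy = nn.Linear(OBS, ACT)        # stand-in pre-trained policy
for p in base_policy.parameters():
    p.requires_grad_(False)              # base stays frozen

residual = nn.Sequential(nn.Linear(OBS, 64), nn.ReLU(), nn.Linear(64, ACT))
opt = torch.optim.Adam(residual.parameters(), lr=1e-3)

def train_on_corrections(obs, corrected_actions):
    """Fit the residual so base(obs) + residual(obs) matches the action
    the human corrected the robot toward (DAgger-style supervision)."""
    pred = base_policy(obs) + residual(obs)
    loss = nn.functional.mse_loss(pred, corrected_actions)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def act(obs):
    with torch.no_grad():                # deployment: base plus correction
        return base_policy(obs) + residual(obs)

loss = train_on_corrections(torch.randn(16, OBS), torch.randn(16, ACT))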
@HaoTang_ai
Hao Tang (hiring postdocs)
22 days
🤖 New paper: MobileVLA-R1, a unified VLA system that brings real reasoning + continuous control to quadruped robots. CoT dataset, 2-stage training, real-world deployment. 📄 Paper, code & demo: https://t.co/w1FqDRS5K4
3
50
244
@realpranavp
Pranav Parthasarathy
3 months
@faraz_r_khan @spectral_hq The plugin is internal. We are working with a select group of partners to make it and an API available. Please let us know if you are interested (DM). We also have a Hugging Face space where you can play with a preview of the model today.
1
1
2