Marc Rigter
@MarcRigter
Followers 778 · Following 668 · Media 13 · Statuses 150
Researcher @SkildAI. Previously: @MSFTResearch @oxfordrobots.
Pittsburgh, PA
Joined March 2021
I joined Skild AI late last year and we've been making swift progress towards more general robots! To record these videos we took the robot around town to locations it had never seen before with no prior preparation or planning.
We’ve all seen humanoid robots doing backflips and dance routines for years. But if you ask them to climb a few stairs in the real world, they stumble! We took our robot on a walk around town to environments that it hadn’t seen before. Here’s how it works🧵⬇️
Incredible to watch a single model figure things out on the fly. If you'd like to chat about this (or anything else related to Skild) I'll be at CoRL this week!
We built a robot brain that nothing can stop. Shattered limbs? Jammed motors? If the bot can move, the Brain will move it — even if it’s an entirely new robot body. Meet the omni-bodied Skild Brain:
Truly general-purpose robots must be able to navigate spaces they have never seen before.
⦿ Skild Brain enables end-to-end autonomous locomotion from raw vision and joint inputs, without mapping or pre-planning.
⦿ The model adapts in real time to new terrain such as stairs,
I had the pleasure of visiting the Skild lab in Pittsburgh about a month ago. It’s easily one of the most futuristic places I’ve seen, packed with robots everywhere -- each busy learning, testing, or solving customer use cases. Even humanoids - just about every piece of humanoid
Skild has been quiet since emerging from stealth in July 2024. They just shared their journey so far, showcasing early milestones. Skild is building toward a single general-purpose, “omni-bodied” robotics foundation model: One Brain – any task, any robot. The Skild Brain
Skild AI is officially out of stealth! I can't wait for us to release more recent results in the coming weeks 🙂
Modern AI is confined to the digital world. At Skild AI, we are building towards AGI for the real world, unconstrained by robot type or task — a single, omni-bodied brain. Today, we are sharing our journey, starting with early milestones, with more to come in the weeks ahead.
We’ve been building quietly — starting tomorrow, we go live. Here’s a teaser of what we did before Skild AI. It has shaped what’s coming next. 07/29. Stay tuned.
The code is open-source and is available here: https://t.co/IWerpl3Tvq 6/n
The arXiv preprint is currently in limbo pending moderation (probably because of how I categorised it, oops!) so for now you can find the preprint here: https://t.co/KnRwgbwt0M 5/n
sites.google.com
AVID: Adapting Video Diffusion Models to World Models Marc Rigter, Tarun Gupta, Agrin Hilmkil, Chao Ma Overview | Code | Paper
Unsurprisingly, fine-tuning works very well and is the strongest approach if the pretrained weights are available. 4/n
The AVID adapter is necessary to generate accurate motion in the video, while the pretrained model improves visual coherence. In contrast, training an action-conditioned video model from scratch with the same compute results in videos that lack consistency between frames. 3/n
Many leading video models are closed-source and cannot be fine-tuned. Therefore, we focus on adapting pretrained models without access to their weights. We train a lightweight adapter that “guides” the pretrained model to the correct action-conditioned generation. 2/n
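The adapter idea above can be sketched very roughly: blend the frozen pretrained model's denoising prediction with the adapter's action-conditioned prediction using a learned mask, so the black-box model is "guided" without touching its weights. The function and variable names below are illustrative assumptions, not AVID's actual implementation:

```python
import math

def avid_style_blend(pretrained_eps, adapter_eps, mask_logit):
    """Blend a frozen pretrained model's noise prediction with a
    lightweight adapter's action-conditioned prediction.

    A learned per-element mask (here a single logit passed through a
    sigmoid) decides how much to trust the adapter at each location.
    Illustrative sketch only -- names and the exact composition rule
    are assumptions, not the paper's architecture.
    """
    mask = 1.0 / (1.0 + math.exp(-mask_logit))  # sigmoid, in [0, 1]
    return mask * adapter_eps + (1.0 - mask) * pretrained_eps

# Toy example on a flattened 4-pixel "frame": a logit of 0 gives an
# equal 50/50 blend of the two predictions at every pixel.
pretrained = [0.0, 0.2, 0.4, 0.6]   # frozen model's noise prediction
adapter = [1.0, 1.0, 1.0, 1.0]      # adapter's action-conditioned prediction
blended = [avid_style_blend(p, a, 0.0) for p, a in zip(pretrained, adapter)]
```

Because only the adapter and mask are trained, this kind of composition never needs gradients through (or weights of) the pretrained model, which is the point when the base model is closed-source.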
Video models like Sora and Gen 3 can generate realistic videos, but can they produce useful synthetic data for planning/RL? Our work (AVID) explores how pretrained image-to-video models can be adapted into accurate action-conditioned world models. 1/n
I'm at ICLR in Vienna this week to present our work on data acquisition for learning general world models! Poster #142, Hall B, Fri 3:30pm Work with @MinqiJiang and @IngmarPosner
How do we create robust agents that generalise well to a wide range of different environments and future tasks? Our new #ICLR paper poses this problem as learning a robust world model...
Reward-Free Curricula for Training Robust World Models (led by @MarcRigter) Fri 4:30pm, Hall B #142
How do we create robust agents that generalise well to a wide range of different environments and future tasks? Our new #ICLR paper poses this problem as learning a robust world model...
This work was recently published in TMLR and the final version is available: https://t.co/RymUOYyRf0 I think diffusing entire on-policy trajectories is a really compelling approach to world modelling, so I’m very excited about this line of work!
Autoregressive next-token prediction is not enough: reliable AI agents are going to require accurate models of the world. I’m excited to share a new approach to world modeling that does not require autoregressive sampling: “World Models via Policy-Guided Trajectory Diffusion”…