Christian Gumbsch Profile
Christian Gumbsch

@cgumbsch

Followers: 197 · Following: 112 · Media: 18 · Statuses: 55

Postdoc @UvA_Amsterdam | world models and sensorimotor abstractions |👾🤖🧠

Joined October 2021
@cgumbsch
Christian Gumbsch
2 years
World models are key for developing adaptive agents. In our #ICLR2024 spotlight we present THICK: an algorithm to learn hierarchical world models with versatile temporal abstractions. And we show how they can enhance model-based RL or planning… 📜 https://t.co/9SVk5TEis5 1/🧵
1 · 34 · 131
@ZadaianchukML
Andrii Zadaianchuk 🇺🇦
3 months
🤖 🌍 We have extended the submission to our CoRL (@corl_conf) workshop about Robot World Models for one more week! Use your chance to submit! 🌏🤖 ❗️Both 1-page abstracts of published works and 4-page novel works in progress are welcome! 📡 Details: https://t.co/XjeFO3YnBn
@ZadaianchukML
Andrii Zadaianchuk 🇺🇦
4 months
🚀 We’re excited to announce our #CoRL2025 workshop: Learning to Simulate Robot Worlds Spanning high-fidelity simulators, digital twins, and learned world models - our goal is to unite communities to push robot learning forward 🤖🌐 🔗 https://t.co/AjKxrN5DId 🧵🧵🧵
0 · 4 · 13
@maxseitzer
Max Seitzer
3 months
Introducing DINOv3 🦕🦕🦕 A SotA-enabling vision foundation model, trained with pure self-supervised learning (SSL) at scale. High quality dense features, combining unprecedented semantic and geometric scene understanding. Three reasons why this matters…
12 · 141 · 1K
@ZadaianchukML
Andrii Zadaianchuk 🇺🇦
4 months
Are you working on real-to-sim, sim-to-real, learning world models, or using physics-based simulators? There are two weeks left until the submission deadline for our CoRL workshop, Learning to Simulate Robot Worlds. More details here: 🔗 https://t.co/XjeFO3YnBn
0 · 2 · 9
@TuranOrujlu
Turan Orujlu
4 months
Reframing attention as an RL problem for causal discovery AI models like GNNs & Transformers can struggle with dynamic causal reasoning. Our work introduces the Causal Process Model (CPM), which reframes attention as an RL problem. Agents dynamically build sparse causal graphs.
1 · 2 · 6
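The attention-as-RL idea in the tweet above can be sketched as a toy bandit: an agent repeatedly chooses which candidate parent to "attend to" for a target variable and is rewarded when that parent actually predicts the target, so only true causal edges end up in the sparse graph. This is an illustrative stand-in, not the paper's actual CPM formulation; the data-generating process and all names are made up.

```python
import random

random.seed(0)

def sample_step():
    # Toy ground truth: y is caused by x1 only; x2 is pure noise.
    x1, x2 = random.random(), random.random()
    y = x1
    return {"x1": x1, "x2": x2}, y

q = {"x1": 0.0, "x2": 0.0}       # value of attending to each candidate parent
counts = {"x1": 0, "x2": 0}

for t in range(500):
    obs, y = sample_step()
    # epsilon-greedy edge selection ("attention" as an RL decision)
    if random.random() < 0.1:
        parent = random.choice(list(q))
    else:
        parent = max(q, key=q.get)
    # reward: negative prediction error when the chosen parent predicts y
    reward = -abs(obs[parent] - y)
    counts[parent] += 1
    q[parent] += (reward - q[parent]) / counts[parent]

# The agent should learn to attend to the true cause x1.
best_edge = max(q, key=q.get)
print(best_edge)
```

With this reward, the spurious edge x2 accumulates negative value while x1 stays at zero, so the greedy policy converges on the causal parent.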
@GMartius
Georg Martius
4 months
@svlevine was just presenting at Exploration in AI @ #ICML2025 and argued that exploration needs to be grounded, and that VLMs are a good source ;-) Check our paper below 👇
@CcansuSancaktar
Cansu Sancaktar
4 months
✨Introducing SENSEI✨ We bring semantically meaningful exploration to model-based RL using VLMs. With intrinsic rewards for novel yet useful behaviors, SENSEI showcases strong exploration in MiniHack, Pokémon Red & Robodesk. Accepted at ICML 2025🎉 Joint work with @cgumbsch 🧵
0 · 5 · 27
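The "novel yet useful" intrinsic reward in SENSEI can be illustrated with a toy sketch: a stub function stands in for the VLM's judgment of how semantically meaningful a state is, and a count-based bonus stands in for novelty. In the actual paper the VLM signal is distilled into the world model; the function names, scores, and states below are all hypothetical.

```python
from collections import Counter

visit_counts = Counter()

def vlm_interestingness(state):
    # Stub for a VLM scoring how semantically meaningful a state is.
    scores = {"wall": 0.0, "key": 0.9, "door": 0.8}
    return scores.get(state, 0.1)

def intrinsic_reward(state, beta=1.0):
    visit_counts[state] += 1
    novelty = 1.0 / visit_counts[state] ** 0.5   # count-based novelty bonus
    return vlm_interestingness(state) + beta * novelty

# A novel AND meaningful state earns a high reward...
r_key_first = intrinsic_reward("key")
# ...while a frequently visited, meaningless one decays toward zero.
r_wall_often = [intrinsic_reward("wall") for _ in range(10)][-1]
print(round(r_key_first, 2), round(r_wall_often, 2))
```

Combining the two terms is what separates this from plain curiosity: novelty alone would also reward staring at walls.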
@th_rupf
Thomas Rupf
4 months
Zero-shot imitation from just a single sparse demonstration is hard. Goal-conditioned methods tend to “greedily” move from one state to the next and lose the big picture. We're presenting an alternative approach on Tuesday at #ICML2025. (1/3)
1 · 7 · 17
@mar_baga
Marco Bagatella
4 months
When multiple tasks need improvements, fine-tuning a generalist policy becomes tricky. How do we allocate a demonstration budget across a set of tasks of varied difficulty and familiarity? We are presenting a possible solution at ICML on Wednesday! (1/3)
1 · 8 · 17
@ZadaianchukML
Andrii Zadaianchuk 🇺🇦
5 months
How to represent dynamic real-world data both consistently and efficiently, while reflecting the compositional object-centric structure of the world? Contrast your slots! ...with our new SlotContrast method(🚀#CVPR2025 Oral🚀)! 🌐website: https://t.co/UXeYp8qyAy 🧵🧵🧵 1/n
1 · 8 · 29
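"Contrast your slots" can be illustrated with a tiny InfoNCE-style objective: each slot at frame t should match the same-index slot at frame t+1 and differ from the others, which is low-loss exactly when slots track objects consistently over time. This is a toy illustration of the idea, not the actual SlotContrast loss or implementation; vectors and temperature are made up.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def slot_info_nce(slots_t, slots_t1, temp=0.1):
    # For slot i at frame t, the positive is slot i at frame t+1;
    # all other slots at t+1 act as negatives.
    loss = 0.0
    for i, s in enumerate(slots_t):
        logits = [dot(s, p) / temp for p in slots_t1]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        loss += -(logits[i] - log_denom)
    return loss / len(slots_t)

# Temporally consistent slots (each keeps tracking the same object) -> low loss
consistent = slot_info_nce([[1, 0], [0, 1]], [[1, 0], [0, 1]])
# Swapped object identities between frames -> high loss
shuffled = slot_info_nce([[1, 0], [0, 1]], [[0, 1], [1, 0]])
print(consistent < shuffled)
```

The loss therefore pushes the model toward object-slot bindings that stay stable across the video.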
@AAchimova
Asya Achimova
1 year
#Context plays a critical role not only in interpreting language but many other cognitive processes. In a new paper, we propose that context-sensitivity is a core feature of human memory that enables flexible planning, #generalization, and decision making https://t.co/LigJbVD4Dc
3 · 33 · 130
@vlastelicap
Marin Vlastelica 🤖🎸
1 year
Have you ever wondered why there are little to no diverse offline imitation learning algorithms 🤔? Then we've got something 4 you! Our paper, "Offline Diversity Maximization under Imitation Constraints", is being presented today at RLC2024 🎇! 🧵 https://t.co/NTOedhW9vD
1 · 18 · 76
@IMOLNeurIPS2024
IMOL Workshop | NeurIPS 2024
1 year
🚨We're back!🚨 Excited to announce The 2nd IMOL Workshop at #neurips2024! Send us your 📜newest work📜on learning & exploration in artificial and biological agents. #CFP at https://t.co/6KDwrtx36b 🔄Please share widely & stay tuned for more programming announcements!
1 · 4 · 40
@NriaArmengol2
Núria Armengol
1 year
Tired of causally confused agents when learning from offline datasets? We propose 🚣🏼‍♀️CAIAC🚣🏼‍♀️, a method for counterfactual data augmentation to improve the robustness of offline learning agents against extreme distributional shifts at test time. 🧵
1 · 14 · 64
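The counterfactual-augmentation idea behind CAIAC can be sketched in a few lines: if a state factor (say, an unrelated object) provably does not influence the agent's transition, its value can be swapped in from another trajectory, producing counterfactual data that breaks spurious correlations in the offline dataset. The factorization and the "known independent" distractor below are hand-coded toy stand-ins, not the paper's learned influence measure.

```python
import random

random.seed(0)

# Each transition: (agent_pos, distractor_pos, next_agent_pos).
# The distractor never influences the agent's dynamics in this toy data.
dataset = [(a, random.randint(0, 9), a + 1) for a in range(5)]

def augment(data):
    augmented = list(data)
    for agent, _, nxt in data:
        # Counterfactual: resample the causally independent distractor
        # from another transition; the agent's dynamics stay untouched.
        donor = random.choice(data)
        augmented.append((agent, donor[1], nxt))
    return augmented

aug = augment(dataset)
print(len(aug))   # original transitions + one counterfactual each
```

Because only non-influencing factors are resampled, every augmented transition is still dynamically valid, which is what makes the extra data safe for offline learning.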
@pietromazzaglia
Pietro Mazzaglia
1 year
🚨Introducing GenRL! An embodied AI agent that learns multimodal foundation world models 🌍 By connecting the multimodal knowledge of foundation models with the embodied knowledge of world models for RL, GenRL enables turning vision and language prompts into actions!
4 · 56 · 255
@cgumbsch
Christian Gumbsch
2 years
For more details check out our paper or our code. Paper 📜: https://t.co/9SVk5TEis5 Code 🐍: https://t.co/UqFIbp48Lt Many thanks to @nsajidt, @GMartius, and @mvbutz! 8/8
🔗 github.com · CognitiveModeling/THICK
0 · 2 · 7
@cgumbsch
Christian Gumbsch
2 years
THICK PlaNet 🪐: We can also plan directly with our model. In THICK PlaNet we first plan on the high level with MCTS and then we search for low-level actions to follow this plan. This is useful for long-horizon or hierarchical tasks with sparse rewards. 7/8
1 · 0 · 1
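The two-level planning scheme can be sketched on a toy 1-D chain: a high-level search picks a subgoal, and a low-level search then finds primitive actions that reach it. The real method runs MCTS over the learned high-level world model; here plain enumeration stands in for MCTS, and everything below is illustrative rather than the paper's algorithm.

```python
def high_level_plan(state, subgoals, goal):
    # Stand-in for MCTS: pick the reachable subgoal closest to the goal.
    return min(subgoals, key=lambda g: abs(g - goal))

def low_level_actions(state, subgoal):
    # Search for primitive actions (+1 / -1 steps) that realize the subgoal.
    actions = []
    while state != subgoal:
        a = 1 if subgoal > state else -1
        actions.append(a)
        state += a
    return actions

state, goal = 0, 7
subgoal = high_level_plan(state, subgoals=[2, 5, 8], goal=goal)
plan = low_level_actions(state, subgoal)
print(subgoal, len(plan))
```

Splitting the search this way keeps the expensive deliberation at the (short) subgoal level, which is exactly what helps on long horizons with sparse rewards.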
@cgumbsch
Christian Gumbsch
2 years
THICK Dreamer 😴: Hierarchical predictions are useful for various applications. For example, in THICK Dreamer we train an actor-critic through hierarchical imaginations. The high-level predictions account for long-horizon outcomes. This can increase sample efficiency. 6/8
1 · 0 · 3
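The benefit of hierarchical imagination for the critic can be sketched as a value target that mixes a short low-level rollout with a long-horizon estimate from a "jumpy" high-level prediction, so distant outcomes reach the critic in one bootstrap step. The toy reward model and value estimate below are stand-ins, not the THICK Dreamer networks.

```python
GAMMA = 0.99

def low_level_rollout(state, steps):
    # One reward per imagined low-level step (toy dynamics: 0.1 per step).
    return [0.1] * steps

def high_level_value(state):
    # Jumpy estimate of the return from the next context change onward (toy).
    return 5.0

def hierarchical_value_target(state, k=3):
    rewards = low_level_rollout(state, k)
    short_term = sum(GAMMA ** i * r for i, r in enumerate(rewards))
    # Bootstrap with the long-horizon, high-level estimate after k steps.
    return short_term + GAMMA ** k * high_level_value(state)

target = hierarchical_value_target(state=None)
print(round(target, 3))
```

A flat k-step target would have to imagine every intermediate step to see the distant payoff; the high-level bootstrap delivers it immediately, which is where the sample-efficiency gain comes from.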
@cgumbsch
Christian Gumbsch
2 years
Level 2 🌐: The high-level world model is trained to predict situations that prompt context changes. The model maintains a categorical distribution of high-level "actions" A to disentangle different temporally abstract outcomes. 5/8
1 · 0 · 2
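A categorical distribution over high-level "actions" can be sketched as a softmax over K discrete options, each meant to capture one temporally abstract outcome. The logits and outcome names below are invented for illustration; in the real model this distribution is produced and trained end to end by the high-level network.

```python
import math
import random

random.seed(0)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical temporally abstract outcomes a high-level "action" could select.
outcomes = ["open door", "pick up key", "exit room"]
logits = [2.0, 0.5, 0.1]          # would come from the high-level model
probs = softmax(logits)

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

counts = [0, 0, 0]
for _ in range(1000):
    counts[sample(probs)] += 1
print(outcomes[counts.index(max(counts))])
```

Keeping the options discrete is what lets the model disentangle qualitatively different outcomes instead of averaging them into one blurred prediction.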
@cgumbsch
Christian Gumbsch
2 years
Context ©️: C-RSSM learns to update its context only in situations that are critical for accurate predictions. The context vectors, shown in grayscale here, change, for example, upon finding items, exiting rooms, or opening doors. 4/8
1 · 0 · 1
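The sparse-context mechanism can be sketched as a gated update: a context variable is rewritten only when a gate decides the situation changed, and stays frozen otherwise. In C-RSSM the gate is learned from prediction error; the hand-coded event check and all names below are illustrative only.

```python
def gate(observation):
    # Fires only on "critical" events such as finding an item or opening a door.
    # (Hand-coded here; learned in the actual model.)
    return observation in {"found_item", "opened_door", "exited_room"}

def step(context, observation):
    if gate(observation):
        return observation   # rewrite the context (toy: store the event itself)
    return context           # otherwise keep the context frozen

obs_stream = ["walk", "walk", "found_item", "walk", "opened_door", "walk"]
context, updates = "start", 0
for obs in obs_stream:
    new_context = step(context, obs)
    updates += new_context != context
    context = new_context
print(context, updates)
```

Because the context changes only twice over six steps, the rare update events themselves mark natural boundaries for the temporal abstractions used at the higher level.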