Lucas Maes (@lucasmaes_)
PhD student, self-supervised world models @Mila_Quebec
Montréal · Joined January 2020
Followers: 273 · Following: 760 · Media: 15 · Statuses: 233
🚨 New World Models Workshop! Thrilled to be hosting this event @Mila_Quebec! I’m convinced this is exactly what the field needs right now to push our community forward. Please RT + apply if you’re into world models! 🚀
🚨 Announcing the World Modeling Workshop 2026 🚨
📅 When: Feb 4–6, 2026
📍 Where: Mila (Montréal) + Online (free)
💡 What: Keynotes, Methods Deep Dive, and Tutorials
🌐 https://t.co/WukFtNON3o
✉️ worldmodel.mila@gmail.com
🧵 Details below:
Replies: 0 · Reposts: 3 · Likes: 10
A world model with the horrible bottleneck induced by text tokens for reasoning ... Still impressive though! But it could be so much better 😔 https://t.co/u9q3NqpAXj
deepmind.google
Introducing SIMA 2, the next milestone in our research creating general and helpful AI agents. By integrating the advanced capabilities of our Gemini models, SIMA is evolving from an instruction-foll…
Replies: 0 · Reposts: 0 · Likes: 3
LeJEPA 🔥 No EMA, No Stop-grad, No Expander, No Masking, ... No Collapse. This is the future of Self-Supervised Learning.
LeJEPA: a novel pretraining paradigm free of the (many) heuristics we relied on (stop-grad, teacher, ...)
- 60+ arch., up to 2B params
- 10+ datasets
- in-domain training (>DINOv3)
- corr(train loss, test perf)=95%
Paper: https://t.co/NpfB9G1pOP
Code: https://t.co/BsK5wmNEHc
Replies: 0 · Reposts: 7 · Likes: 92
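To make the "no EMA, no stop-grad, no collapse" claim above more concrete, here is a minimal, hypothetical sketch of a JEPA-style training step that uses a single shared encoder for both views and a generic isotropy-style regularizer as a stand-in for LeJEPA's actual objective; the architecture, dimensions, and regularizer below are illustrative assumptions, not the paper's recipe.

```python
# Hypothetical sketch of a heuristic-free JEPA-style objective: one shared encoder
# (no EMA teacher, no stop-gradient), a predictor, and a simple isotropy-style
# regularizer standing in for the paper's actual regularization term.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyJEPA(nn.Module):
    def __init__(self, dim=128, emb=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, emb))
        self.predictor = nn.Sequential(nn.Linear(emb, 256), nn.ReLU(), nn.Linear(256, emb))

    def forward(self, view_a, view_b):
        z_a = self.encoder(view_a)          # context embedding
        z_b = self.encoder(view_b)          # target embedding (same weights, gradients flow)
        pred = self.predictor(z_a)          # predict the target embedding from the context
        pred_loss = F.mse_loss(pred, z_b)   # JEPA prediction loss in latent space
        # Stand-in regularizer: push embeddings toward zero mean and unit variance
        # so the latent space cannot collapse to a constant.
        z = torch.cat([z_a, z_b], dim=0)
        reg = z.mean(0).pow(2).mean() + (z.var(0) - 1).pow(2).mean()
        return pred_loss + reg

model = TinyJEPA()
x = torch.randn(32, 128)
loss = model(x + 0.1 * torch.randn_like(x), x)   # two noisy "views" of the same data
loss.backward()
```

The point of the sketch is that once a collapse-preventing regularizer is in place, none of the usual asymmetries (teacher network, stop-gradient, expander head, masking) are needed to obtain a non-trivial objective.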
CDS Sr. Research Scientist @cosmo_shirley and CDS Prof. @ylecun are keynoting the 2026 World Modeling Workshop (Feb 4–6 at Mila), with @NYU_Courant's @sherryyangML, and others. The workshop spans SSL, RL, robotics, LLMs & more. 📍 Montréal + online 🔗 https://t.co/94DOjVhgFa
Replies: 4 · Reposts: 16 · Likes: 84
🚨 Interested in generative world models? We’re thrilled to host Stephen Spenser (@GoogleDeepMind) at the World Modeling Workshop 2026, where he’ll talk about the Genie series of models! 🌐 https://t.co/inI2YV2Dsl
Replies: 3 · Reposts: 17 · Likes: 115
Sharing our work at @NeurIPSConf on reasoning with EBMs! We learn an EBM over simple subproblems and combine EBMs at test time to solve complex reasoning problems (3-SAT, graph coloring, crosswords). Generalizes well to complex 3-SAT / graph coloring / N-queens problems.
Replies: 9 · Reposts: 43 · Likes: 373
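To illustrate the "combine EBMs at test time" idea in the announcement above, here is a toy, hypothetical sketch: each 3-SAT clause contributes its own energy term, the composed energy is their sum, and a relaxed assignment is optimized by gradient descent. The clause encoding, instance, and optimizer settings are made-up placeholders, not the learned EBMs from the paper.

```python
# Hypothetical illustration of composing per-subproblem energies at test time for 3-SAT:
# each clause gets its own energy term, the total energy is their sum, and we minimize
# over a relaxed variable assignment. A generic sketch, not the paper's learned-EBM method.
import torch

clauses = [(1, -2, 3), (-1, 2, -3), (2, 3, -1)]   # toy 3-SAT instance (1-indexed literals)

def clause_energy(x, clause):
    # x[i] in (0, 1) is the relaxed probability that variable i+1 is True.
    # A clause is violated only if every literal is false, so its energy is the
    # product of the per-literal "falseness" terms.
    e = torch.ones(())
    for lit in clause:
        p_true = x[abs(lit) - 1]
        e = e * ((1 - p_true) if lit > 0 else p_true)
    return e

logits = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(200):
    x = torch.sigmoid(logits)
    energy = sum(clause_energy(x, c) for c in clauses)   # composed energy over subproblems
    opt.zero_grad()
    energy.backward()
    opt.step()

assignment = torch.sigmoid(logits) > 0.5
print(assignment)   # candidate satisfying assignment for the toy instance (if one is found)
```

Swapping in learned per-subproblem energies and a better optimizer is where the actual work lives; the composition-by-summation structure is the part this sketch tries to show.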
(1/2) Say you want to plan your way back from Honolulu to NYC using a WM in pixels: 1. The world is stochastic and partially observable. You can plan how to pack your suitcase and leave the hotel room, but 99% of your pixel-level plan afterwards is useless.
@_amirbar Why not?
Replies: 9 · Reposts: 5 · Likes: 86
Hype breeds chaos. It’s time to name and classify the many recipes for building world models.
Replies: 0 · Reposts: 0 · Likes: 4
I've just watched this interview; very interesting to hear it 3 years later. I'd love to hear how your ideas have evolved since then, @danijarh.
https://t.co/J0Mhtro41E
Replies: 0 · Reposts: 0 · Likes: 7
Should you use discrete or continuous embeddings for world models? I just came across this video from @ejmejm1 - highly recommend it! https://t.co/gi1a3VivOn
Replies: 0 · Reposts: 0 · Likes: 2
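For readers new to the distinction in the question above, here is a minimal, hypothetical contrast: a continuous world-model latent uses the encoder output directly, while a discrete one snaps each vector to the nearest entry of a learned codebook (VQ-style). The codebook size, dimensions, and random tensors are arbitrary placeholders.

```python
# Hypothetical minimal contrast between the two latent choices: a continuous embedding
# passes the encoder output straight through, while a discrete one replaces it with the
# nearest entry of a learned codebook (vector quantization).
import torch

codebook = torch.randn(512, 64)        # 512 discrete codes of dimension 64 (placeholder)
z_continuous = torch.randn(32, 64)     # stand-in for an encoder's continuous latent

dists = torch.cdist(z_continuous, codebook)   # (32, 512) pairwise distances to the codes
codes = dists.argmin(dim=1)                   # index of the nearest code per latent vector
z_discrete = codebook[codes]                  # quantized latent the world model would consume

print(z_continuous.shape, z_discrete.shape, codes[:5])
```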
Just arrived at @ICCVConference, and we will present this paper at Poster Session 5 on Thursday (23rd) at 10:45 AM. Hit me up if you'd like to chat about anything SSL and beyond!
Our paper "Beyond [cls]: Exploring the True Potential of Masked Image Modeling Representations" has been accepted to @ICCVConference! 🧵 TL;DR: Masked image models (like MAE) underperform not just because of weak features, but because they aggregate them poorly. [1/7]
Replies: 0 · Reposts: 2 · Likes: 6
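The aggregation point made in the paper above can be illustrated with a tiny, hypothetical probing setup: the same ViT token outputs are summarized either by the [cls] token or by mean-pooling the patch tokens before a linear probe. The tensors below are random stand-ins; with a real MAE backbone you would substitute its actual token outputs.

```python
# Hypothetical sketch of two aggregation strategies over ViT token outputs of shape
# (batch, 1 + num_patches, dim): probe the [cls] token vs. probe the mean of the
# patch tokens. Random tensors stand in for real MAE features.
import torch
import torch.nn as nn

tokens = torch.randn(32, 197, 768)        # placeholder for ViT outputs: [cls] + 196 patches
labels = torch.randint(0, 10, (32,))      # placeholder labels for a 10-class probe

cls_feat = tokens[:, 0]                   # aggregation 1: [cls] token only
mean_feat = tokens[:, 1:].mean(dim=1)     # aggregation 2: mean of the patch tokens

probe_cls = nn.Linear(768, 10)
probe_mean = nn.Linear(768, 10)
loss_fn = nn.CrossEntropyLoss()
print(loss_fn(probe_cls(cls_feat), labels).item(),
      loss_fn(probe_mean(mean_feat), labels).item())
```

Which aggregation extracts more from real masked-image-model features is exactly what the paper studies.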
I am heading to @Mila_Quebec for 2 months to work together with Dhanya Sridhar, Simon Lacoste-Julien and their groups. Reach out if you want to talk about (causal) representation learning, OOD generalization, self-supervised learning, and compositionality—in LLMs and beyond.
Replies: 2 · Reposts: 3 · Likes: 24
Essentially, the entire narrative of @ylecun over the past couple of years...
It's pretty funny that we are converging on the reality that will upset the maximum amount of people: LLMs are pretty good. They're not transformative, and also not useless. There are some tasks where LLMs will save you a lot of time. And that's it
Replies: 22 · Reposts: 39 · Likes: 686
Are reward functions a must for intelligence? What are the alternatives?
Replies: 2 · Reposts: 0 · Likes: 1
The viral new "Definition of AGI" paper has fake citations which do not exist. And it specifically TELLS you to read them! Proof: different articles present at the specified journal/volume/page number, and their titles exist nowhere on any searchable repository.
Replies: 103 · Reposts: 220 · Likes: 2K
Is there such a thing as too many agents in multi-agent systems? It depends! 🧵 Our work reveals 3 distinct regimes where communication patterns differ dramatically. More on our findings below 👇 (1/7)
Replies: 1 · Reposts: 8 · Likes: 22
🚨 Interested in world models, time series, and finance? We’re thrilled to host @cbbruss (Capital One) at the World Modeling Workshop 2026.
💬 “Time and Money – Modeling the World of Consumer Finance”
📍 Mila, Montreal
🌐 https://t.co/inI2YV2Dsl
👉 https://t.co/HvVbVQKyZg
Replies: 0 · Reposts: 1 · Likes: 7
🕳️🐇Into the Rabbit Hull – Part II Continuing our interpretation of DINOv2, the second part of our study concerns the geometry of concepts and the synthesis of our findings toward a new representational phenomenology: the Minkowski Representation Hypothesis
Replies: 5 · Reposts: 68 · Likes: 381
🕳️🐇Into the Rabbit Hull – Part I (Part II tomorrow) An interpretability deep dive into DINOv2, one of vision’s most important foundation models. And today is Part I, buckle up, we're exploring some of its most charming features.
Replies: 10 · Reposts: 121 · Likes: 643
One can manipulate LLM rankings to put any model in the lead merely by modifying the single character separating demonstration examples. Learn more in our new paper https://t.co/D8CzSpPxMU w/ Jingtong Su, Jianyu Zhang, @karen_ullrich, and Léon Bottou. 1/3 🧵
Replies: 1 · Reposts: 3 · Likes: 11
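For intuition about the lever described in the paper above, here is a hypothetical sketch of few-shot prompts that differ only in the single character joining the demonstrations; the toy demos, query, and separator set are made up for illustration.

```python
# Hypothetical sketch: build otherwise-identical few-shot prompts that differ only in the
# single character separating the in-context demonstrations. Each variant would then be
# scored by the models under evaluation.
demos = [("2+2=", "4"), ("3+5=", "8")]   # toy demonstrations
query = "7+6="

def build_prompt(separator: str) -> str:
    shots = [f"{q}{a}" for q, a in demos]
    return separator.join(shots + [query])

for sep in ["\n", " ", "#", "\t"]:
    print(repr(build_prompt(sep)))
```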