Lucas Maes

@lucasmaes_

Followers 273 · Following 760 · Media 15 · Statuses 233

PhD student, self-supervised world models @Mila_Quebec

Montréal
Joined January 2020
@lucasmaes_
Lucas Maes
2 months
🚨 New World Models Workshop! Thrilled to be hosting this event @Mila_Quebec! I’m convinced this is exactly what the field needs right now to push our community forward. Please RT + apply if you’re into world models! 🚀
@worldmodel_26
World Modeling Workshop 2026
2 months
🚨 Announcing the World Modeling Workshop 2026 🚨
📅 When: Feb 4–6, 2026
📍 Where: Mila (Montréal) + Online (free)
💡 What: Keynotes, Methods Deep Dive, and Tutorials
🌐 https://t.co/WukFtNON3o
✉️ worldmodel.mila@gmail.com
🧵 Details below:
0 replies · 3 reposts · 10 likes
@lucasmaes_
Lucas Maes
4 days
LeJEPA 🔥 No EMA, No Stop-grad, No Expander, No Masking, ... No Collapse. This is the future of Self-Supervised Learning.
@randall_balestr
Randall Balestriero
4 days
LeJEPA: a novel pretraining paradigm free of the (many) heuristics we relied on (stop-grad, teacher, ...)
- 60+ arch., up to 2B params
- 10+ datasets
- in-domain training (>DINOv3)
- corr(train loss, test perf) = 95%
Paper: https://t.co/NpfB9G1pOP
Code: https://t.co/BsK5wmNEHc
0 replies · 7 reposts · 92 likes
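To make the "no tricks" claim concrete, here is a minimal sketch of a JEPA-style objective with no stop-grad, EMA teacher, expander, or masking, assuming a simple isotropic-Gaussian moment-matching term stands in for the paper's actual anti-collapse regularizer. All names, the toy architecture, and the regularizer itself are illustrative, not LeJEPA's code:

```python
# Illustrative sketch, NOT the official LeJEPA code: a JEPA prediction loss
# between two views plus a moment-matching regularizer pushing embeddings
# toward N(0, I), the kind of term that can replace stop-grad/EMA heuristics.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 128))
predictor = nn.Linear(128, 128)

def jepa_like_loss(view_a, view_b, reg_weight=1.0):
    za, zb = encoder(view_a), encoder(view_b)        # one encoder for both views (no teacher)
    pred_loss = (predictor(za) - zb).pow(2).mean()   # predict B's embedding from A's (no stop-grad)
    z = torch.cat([za, zb], dim=0)
    mu, cov = z.mean(dim=0), torch.cov(z.T)          # empirical first/second moments
    # Match N(0, I): a constant (collapsed) embedding has zero covariance and
    # pays a large penalty here, which is the collapse-avoidance intuition.
    reg = mu.pow(2).mean() + (cov - torch.eye(z.shape[1])).pow(2).mean()
    return pred_loss + reg_weight * reg

loss = jepa_like_loss(torch.randn(64, 3, 32, 32), torch.randn(64, 3, 32, 32))
loss.backward()
```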
@NYUDataScience
NYU Center for Data Science
6 days
CDS Sr. Research Scientist @cosmo_shirley and CDS Prof. @ylecun are keynoting the 2026 World Modeling Workshop (Feb 4–6 at Mila), with @NYU_Courant's @sherryyangML, and others. The workshop spans SSL, RL, robotics, LLMs & more.
📍 Montréal + online
🔗 https://t.co/94DOjVhgFa
4 replies · 16 reposts · 84 likes
@worldmodel_26
World Modeling Workshop 2026
13 days
🚨 Interested in generative world models? We’re thrilled to host Stephen Spenser (@GoogleDeepMind) at the World Modeling Workshop 2026, where he’ll talk about the Genie series of models!
🌐 https://t.co/inI2YV2Dsl
3 replies · 17 reposts · 115 likes
@du_yilun
Yilun Du
19 days
Sharing our work at @NeurIPSConf on reasoning with EBMs! We learn an EBM over simple subproblems and combine EBMs at test time to solve complex reasoning problems (3-SAT, graph coloring, crosswords). Generalizes well to complex 3-SAT / graph coloring / N-queens problems.
9 replies · 43 reposts · 373 likes
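A toy version of the test-time composition idea: define one energy per subproblem (here, per 3-SAT clause), sum them into a composed energy, and minimize over a continuous relaxation of the variables. This is a sketch of the general recipe under my own simplifications, not the paper's learned models or solver:

```python
# Toy composition of energies at test time: each 3-SAT clause contributes an
# energy that is zero iff the clause is (softly) satisfied; the composed EBM
# is their sum, minimized by gradient descent on relaxed boolean variables.
import torch

n_vars = 4
clauses = [(1, -2, 3), (-1, 2, -4), (2, -3, 4)]  # 1-indexed literals; sign = negation

logits = torch.zeros(n_vars, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

def clause_energy(x, clause):
    unsat = 1.0
    for lit in clause:
        v = x[abs(lit) - 1]
        truth = v if lit > 0 else 1 - v
        unsat = unsat * (1 - truth)   # product shrinks to 0 if any literal is true
    return unsat

for _ in range(200):
    opt.zero_grad()
    x = torch.sigmoid(logits)                            # relax booleans to [0, 1]
    energy = sum(clause_energy(x, c) for c in clauses)   # composed energy = sum over subproblems
    energy.backward()
    opt.step()

print((torch.sigmoid(logits) > 0.5).int().tolist())  # round to a candidate assignment
```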
@_amirbar
Amir Bar
21 days
(1/2) Say you want to plan your way back from Honolulu to NYC using a WM in pixels: 1. The world is stochastic and partially observable. You can plan how to pack your suitcase and leave the hotel room, but 99% of your pixel-level plan afterwards is useless.
@YGandelsman
Yossi Gandelsman
22 days
@_amirbar Why not?
9 replies · 5 reposts · 86 likes
@lucasmaes_
Lucas Maes
23 days
Hype breeds chaos. It’s time to name and classify the many recipes for building world models.
0 replies · 0 reposts · 4 likes
@lucasmaes_
Lucas Maes
26 days
I've just watched this interview; very interesting to hear it 3 years later. Would love to hear how your ideas have evolved since then, @danijarh https://t.co/J0Mhtro41E
0 replies · 0 reposts · 7 likes
@lucasmaes_
Lucas Maes
28 days
Should you use discrete or continuous embeddings for world models? I just came across this video from @ejmejm1 - highly recommend it! https://t.co/gi1a3VivOn
0 replies · 0 reposts · 2 likes
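For anyone wanting the two options side by side in code, here is a generic sketch (my illustration, not from the video) of a continuous latent versus a VQ-style discrete latent for a world-model encoder:

```python
# Continuous vs. discrete latents: the continuous variant uses the encoder
# output directly; the discrete variant snaps it to the nearest codebook
# entry (the straight-through gradient trick used in training is omitted).
import torch
import torch.nn as nn

enc = nn.Linear(64, 16)           # toy observation encoder
codebook = nn.Embedding(32, 16)   # 32 discrete codes for the VQ variant

obs = torch.randn(8, 64)

z_cont = enc(obs)                            # continuous latent
d = torch.cdist(z_cont, codebook.weight)     # (8, 32) distances to all codes
z_disc = codebook(d.argmin(dim=1))           # quantized (discrete) latent
```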
@pszwnzl
Marcin Przewięźlikowski
28 days
Just arrived at @ICCVConference, and we will present this paper at Poster Session 5 on Thursday (23rd) at 10:45 AM. Hit me up if you'd like to chat about anything SSL and beyond!
@pszwnzl
Marcin Przewięźlikowski
5 months
Our paper "Beyond [cls]: Exploring the True Potential of Masked Image Modeling Representations" has been accepted to @ICCVConference! 🧵 TL;DR: Masked image models (like MAE) underperform not just because of weak features, but because they aggregate them poorly. [1/7]
0 replies · 2 reposts · 6 likes
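The aggregation point is easy to state in code: the common readout keeps only the [cls] token, while the paper's argument is that MIM features need better pooling over the patch tokens. A minimal illustration (mean pooling here is just a stand-in, not the paper's proposed aggregation):

```python
# Readout choice for masked-image-model features: [cls] token only vs. a
# pooling over the 196 patch tokens of a ViT-B-style encoder on 224px input.
import torch

tokens = torch.randn(8, 197, 768)     # (batch, 1 [cls] + 196 patches, dim)

cls_readout = tokens[:, 0]            # standard [cls]-only readout
patch_pooled = tokens[:, 1:].mean(1)  # stand-in aggregation over patch tokens
```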
@rpatrik96
Patrik Reizinger@Mila
28 days
I am heading to @Mila_Quebec for 2 months to work together with Dhanya Sridhar, Simon Lacoste-Julien and their groups. Reach out if you want to talk about (causal) representation learning, OOD generalization, self-supervised learning, and compositionality—in LLMs and beyond.
2 replies · 3 reposts · 24 likes
@lucasmaes_
Lucas Maes
29 days
Essentially, the entire narrative of @ylecun over the past couple of years...
@wispem_wantex
wispem-wantex
30 days
It's pretty funny that we are converging on the reality that will upset the maximum amount of people: LLMs are pretty good. They're not transformative, and also not useless. There are some tasks where LLMs will save you a lot of time. And that's it
22 replies · 39 reposts · 686 likes
@lucasmaes_
Lucas Maes
29 days
Are reward functions a must for intelligence? What are the alternatives?
2 replies · 0 reposts · 1 like
@m2saxon
Michael Saxon
30 days
The viral new "Definition of AGI" paper has fake citations which do not exist. And it specifically TELLS you to read them! Proof: different articles are present at the specified journal/volume/page numbers, and their titles exist nowhere in any searchable repository.
103 replies · 220 reposts · 2K likes
@frisbeemortel
Michael Rizvi-Martel
30 days
Is there such a thing as too many agents in multi-agent systems? It depends! 🧵 Our work reveals 3 distinct regimes where communication patterns differ dramatically. More on our findings below 👇 (1/7)
1 reply · 8 reposts · 22 likes
@worldmodel_26
World Modeling Workshop 2026
1 month
🚨 Interested in world models, time series, and finance? We’re thrilled to host @cbbruss (Capital One) at the World Modeling Workshop 2026.
💬 “Time and Money – Modeling the World of Consumer Finance”
📍 Mila, Montreal
🌐 https://t.co/inI2YV2Dsl
👉 https://t.co/HvVbVQKyZg
0 replies · 1 repost · 7 likes
@Napoolar
Thomas Fel
1 month
🕳️🐇Into the Rabbit Hull – Part II Continuing our interpretation of DINOv2, the second part of our study concerns the geometry of concepts and the synthesis of our findings toward a new representational phenomenology: the Minkowski Representation Hypothesis
5 replies · 68 reposts · 381 likes
@Napoolar
Thomas Fel
1 month
🕳️🐇Into the Rabbit Hull – Part I (Part II tomorrow) An interpretability deep dive into DINOv2, one of vision’s most important foundation models. And today is Part I, buckle up, we're exploring some of its most charming features.
10 replies · 121 reposts · 643 likes
@marksibrahim
Mark Ibrahim
1 month
One can manipulate LLM rankings to put any model in the lead, merely by modifying the single character separating demonstration examples. Learn more in our new paper https://t.co/D8CzSpPxMU w/ Jingtong Su, Jianyu Zhang, @karen_ullrich, and Léon Bottou. 1/3 🧵
1 reply · 3 reposts · 11 likes
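To see how small the attack surface is, consider a hypothetical few-shot harness where the only knob is the string joining demonstration examples; per the paper, varying that single character is enough to reorder model rankings. The harness below is my own sketch, not the paper's code:

```python
# Hypothetical few-shot prompt builder: everything is held fixed except the
# single separator between demonstrations, the lever the paper identifies.
demos = ["Q: 2+2? A: 4", "Q: 3+3? A: 6"]
query = "Q: 5+5? A:"

def build_prompt(separator: str) -> str:
    return separator.join(demos) + separator + query

for sep in ["\n", " ", ";", "|"]:
    prompt = build_prompt(sep)
    # score = model.loglikelihood(prompt, answer)  # evaluate each model per separator
    print(repr(sep), "->", repr(prompt[:40]))
```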