Hafez Ghaemi
@hafezghm
Followers
217
Following
38K
Media
5
Statuses
33
Ph.D. Student @Mila_Quebec and @UMontreal, ML Researcher
Montreal, QC
Joined August 2014
🚨 Preprint Alert 🚀 📄 seq-JEPA: Autoregressive Predictive Learning of Invariant-Equivariant World Models https://t.co/vJaFyoQZvV Can we simultaneously learn both transformation-invariant and transformation-equivariant representations with self-supervised learning (SSL)?
arxiv.org
Current self-supervised algorithms commonly rely on transformations such as data augmentation and masking to learn visual representations. This is achieved by enforcing invariance or equivariance...
3
13
37
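The core data flow behind that question can be sketched in a few lines of PyTorch. The block below is my own illustrative reading of the abstract, not the paper's reference implementation: the class name, dimensions, and the exact conditioning scheme (SeqJEPASketch, view_dim, act_dim, emb_dim) are assumptions. A sequence of observed views is encoded, each view embedding is conditioned on the action/transformation linking it to the next view, a small transformer aggregates the sequence, and a predictor estimates the representation of the next, unseen view; in JEPA-style training the target for that prediction would come from a separate (e.g. EMA) target encoder with a stop-gradient.

```python
# Illustrative seq-JEPA-style forward pass (my own sketch; names, sizes, and the
# exact conditioning scheme are assumptions, not the paper's implementation).
import torch
import torch.nn as nn

class SeqJEPASketch(nn.Module):
    def __init__(self, view_dim=512, act_dim=4, emb_dim=128):
        super().__init__()
        # Online encoder for observed views (e.g. patches / glimpses).
        self.encoder = nn.Sequential(
            nn.Linear(view_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, emb_dim)
        )
        # Embedding of the action / transformation parameters between views.
        self.act_embed = nn.Linear(act_dim, emb_dim)
        # Transformer aggregator over action-conditioned view tokens.
        layer = nn.TransformerEncoderLayer(d_model=2 * emb_dim, nhead=4, batch_first=True)
        self.aggregator = nn.TransformerEncoder(layer, num_layers=2)
        # Predictor: sequence summary + next action -> next view's representation.
        self.predictor = nn.Linear(2 * emb_dim + emb_dim, emb_dim)

    def forward(self, views, actions, next_action):
        # views: (B, T, view_dim), actions: (B, T, act_dim), next_action: (B, act_dim)
        z = self.encoder(views)                          # per-view representations
        a = self.act_embed(actions)
        tokens = torch.cat([z, a], dim=-1)               # action-conditioned view tokens
        agg = self.aggregator(tokens)[:, -1]             # summary of the observed sequence
        q = self.act_embed(next_action)                  # action leading to the unseen view
        return self.predictor(torch.cat([agg, q], dim=-1))

model = SeqJEPASketch()
pred = model(torch.randn(2, 5, 512), torch.randn(2, 5, 4), torch.randn(2, 4))
print(pred.shape)  # torch.Size([2, 128]); compare against a target encoder's output on the next view
```

How invariance and equivariance end up distributed across the per-view encoder and the aggregated sequence representation is exactly what the paper investigates; the sketch only fixes the overall data flow.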
CDS Sr. Research Scientist @cosmo_shirley and CDS Prof. @ylecun are keynoting the 2026 World Modeling Workshop (Feb 4–6 at Mila), with @NYU_Courant's @sherryyangML, and others. The workshop spans SSL, RL, robotics, LLMs & more. 📍 Montréal + online 🔗 https://t.co/94DOjVhgFa
4
17
86
📢Update for the World Modeling Workshop 2026! Unfortunately, Ricky Chen will not be able to give the tutorial on Diffusion/Flow Matching. However, we’re excited to announce that @GlenBerseth has kindly agreed to give a tutorial on VLA instead! 🙌 🌐 https://t.co/inI2YV3bhT
0
2
12
🚨 Interested in generative world models? We’re thrilled to host Stephen Spenser (@GoogleDeepMind) at the World Modeling Workshop 2026, where he’ll talk about the Genie series of models! 🌐 https://t.co/inI2YV2Dsl
3
17
116
🚨 We’re honored to host Prof. Jürgen Schmidhuber (The Swiss AI Lab & KAUST) at the World Modeling Workshop 2026! ✨ A pioneer of modern AI, Prof. @SchmidhuberAI has made influential contributions that shaped the field — we’re thrilled to welcome him. 🌐 https://t.co/inI2YV2Dsl
0
15
104
Excited to share that seq-JEPA has been accepted to NeurIPS 2025!
🚨 Preprint Alert 🚀 📄 seq-JEPA: Autoregressive Predictive Learning of Invariant-Equivariant World Models https://t.co/vJaFyoQZvV Can we simultaneously learn both transformation-invariant and transformation-equivariant representations with self-supervised learning (SSL)?
3
3
18
🌍 Excited to host Florian Bordes (@AIatMeta) at our World Model Workshop! He will present IntPhys 2, a benchmark for AI’s intuitive physics understanding. 🌐 https://t.co/WukFtNOfdQ 📍 Mila, Montreal, Canada 👉 https://t.co/tTC1eAFsL4
#AI #WorldModels #IntuitivePhysics
1
7
14
Super excited to be speaking alongside giants such as @ylecun and @Yoshua_Bengio at the world model workshop 🚀 at Mila! Hope to see many of you in wondrous Montreal!
🚨Announcing the World Modeling Workshop 2026 🚨 📅 When: Feb 4–6, 2026 📍Where: Mila (Montréal) + Online (free) 💡 What: Keynotes, Methods Deep Dive, and Tutorials 🌐 https://t.co/WukFtNON3o ✉️ worldmodel.mila@gmail.com 🧵 Details below:
2
22
249
We are organizing a hands-on world modeling workshop to discuss, share ideas, and argue about the next generation of world models! An impressive lineup including @ylecun @Yoshua_Bengio @sherryyangML @cosmo_shirley and many more! Submit your work, register, and see you in Feb!
🚨Announcing the World Modeling Workshop 2026 🚨 📅 When: Feb 4–6, 2026 📍Where: Mila (Montréal) + Online (free) 💡 What: Keynotes, Methods Deep Dive, and Tutorials 🌐 https://t.co/WukFtNON3o ✉️ worldmodel.mila@gmail.com 🧵 Details below:
3
19
97
We are organizing a workshop on World Models at Mila next Feb! If you want to understand how to learn and use world models, come join the discussion.
🚨Announcing the World Modeling Workshop 2026 🚨 📅 When: Feb 4–6, 2026 📍Where: Mila (Montréal) + Online (free) 💡 What: Keynotes, Methods Deep Dive, and Tutorials 🌐 https://t.co/WukFtNON3o ✉️ worldmodel.mila@gmail.com 🧵 Details below:
0
8
63
Join us to push the field forward! Please spread the word📣📣
🚨Announcing the World Modeling Workshop 2026 🚨 📅 When: Feb 4–6, 2026 📍Where: Mila (Montréal) + Online (free) 💡 What: Keynotes, Methods Deep Dive, and Tutorials 🌐 https://t.co/WukFtNON3o ✉️ worldmodel.mila@gmail.com 🧵 Details below:
0
3
6
Excited to announce this awesome workshop we are organizing at Mila! World models touch so many fields, from robotics/video to LLMs and AI4Science: there will be something for everyone! We already have several amazing speakers confirmed, with more to be announced soon 🤩
🚨Announcing the World Modeling Workshop 2026 🚨 📅 When: Feb 4–6, 2026 📍Where: Mila (Montréal) + Online (free) 💡 What: Keynotes, Methods Deep Dive, and Tutorials 🌐 https://t.co/WukFtNON3o ✉️ worldmodel.mila@gmail.com 🧵 Details below:
0
5
22
🚨 New World Models Workshop! Thrilled to be hosting this event @Mila_Quebec! I’m convinced this is exactly what the field needs right now to push our community forward. Please RT + apply if you’re into world models! 🚀
🚨Announcing the World Modeling Workshop 2026 🚨 📅 When: Feb 4–6, 2026 📍Where: Mila (Montréal) + Online (free) 💡 What: Keynotes, Methods Deep Dive, and Tutorials 🌐 https://t.co/WukFtNON3o ✉️ worldmodel.mila@gmail.com 🧵 Details below:
0
3
10
Excited to be one of the organizers of this workshop! If you work on world modeling, join us to discuss the future of the field. Stay tuned for more speakers!
🚨Announcing the World Modeling Workshop 2026 🚨 📅 When: Feb 4–6, 2026 📍Where: Mila (Montréal) + Online (free) 💡 What: Keynotes, Methods Deep Dive, and Tutorials 🌐 https://t.co/WukFtNON3o ✉️ worldmodel.mila@gmail.com 🧵 Details below:
0
2
7
🚨Announcing the World Modeling Workshop 2026 🚨 📅 When: Feb 4–6, 2026 📍Where: Mila (Montréal) + Online (free) 💡 What: Keynotes, Methods Deep Dive, and Tutorials 🌐 https://t.co/WukFtNON3o ✉️ worldmodel.mila@gmail.com 🧵 Details below:
6
58
242
Huge thanks to my supervisors and co-authors @NeuralEnsemble and @ShahabBakht! Check out the full paper here: 📄 https://t.co/vJaFyoQZvV 💻 Code coming soon! 📬 DM me if you’d like to chat or discuss the paper! (7/7)
arxiv.org
Current self-supervised algorithms commonly rely on transformations such as data augmentation and masking to learn visual representations. This is achieved by enforcing invariance or equivariance...
0
0
3
Interestingly, seq-JEPA shows path integration capabilities – an important research problem in neuroscience. By observing a sequence of views and their corresponding actions, it can integrate the path connecting the initial view to the final view. (6/7)
1
0
3
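A simple way to check this kind of path integration on frozen representations is to regress the cumulative transformation of a sequence from its aggregated representation. The sketch below is an illustrative probe of my own, not the paper's protocol; the probe, agg_dim, and the 2-D displacement target are assumptions.

```python
# Hedged path-integration probe sketch: if the aggregate representation truly
# integrates the actions taken, a linear regressor on it should recover the
# cumulative transformation from the first view to the last (illustrative only).
import torch
import torch.nn as nn

agg_dim, act_dim = 256, 2                       # e.g. 2-D cumulative glimpse displacement (assumption)
probe = nn.Linear(agg_dim, act_dim)

agg = torch.randn(16, agg_dim)                  # frozen aggregate representations of 16 sequences
actions = torch.randn(16, 5, act_dim)           # per-step actions along each 5-step sequence
target = actions.sum(dim=1)                     # the path: cumulative transformation per sequence
loss = nn.functional.mse_loss(probe(agg), target)
loss.backward()
print(loss.item())
```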
On the 3D Invariant-Equivariant Benchmark (3DIEBench), where each object view has a different rotation angle, seq-JEPA achieves top performance on both invariance-related object categorization and equivariance-related rotation prediction without sacrificing one for the other, as…
1
0
2
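For context, this kind of evaluation is usually done by training lightweight probes on frozen features. The sketch below is a generic version of that recipe, not the exact 3DIEBench protocol: the class count, the quaternion rotation target, and the probe names are placeholders. A linear classifier on a single view checks how well category information survives rotation (invariance), while a linear head on a pair of views checks whether their relative rotation is still decodable (equivariance).

```python
# Generic invariance/equivariance probing sketch on frozen features (placeholder
# dimensions; not the exact 3DIEBench evaluation protocol).
import torch
import torch.nn as nn

feat_dim = 128        # dimensionality of the frozen representations (assumption)
n_classes = 10        # placeholder; use the benchmark's actual class count
rot_dim = 4           # assuming a quaternion parameterization of the rotation

invariance_probe = nn.Linear(feat_dim, n_classes)        # category from a single view
equivariance_probe = nn.Linear(2 * feat_dim, rot_dim)    # relative rotation from a view pair

z1, z2 = torch.randn(8, feat_dim), torch.randn(8, feat_dim)   # frozen features of two views
logits = invariance_probe(z1)                                  # invariance: label decodable from one view
rot_pred = equivariance_probe(torch.cat([z1, z2], dim=-1))     # equivariance: rotation decodable from the pair
print(logits.shape, rot_pred.shape)  # torch.Size([8, 10]) torch.Size([8, 4])
```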
Seq-JEPA learns invariant-equivariant representations for a variety of tasks that contain sequential observations and transformations; for example, it can learn semantic image representations by seeing a sequence of small image patches across simulated eye movements with no…
1
0
3
🧠Humans learn to recognize new objects by moving around them, manipulating them, and probing them via eye movements. Different views of a novel object are generated through actions (manipulations, eye movements, etc.) and are then integrated to form new concepts in the brain.
1
0
3
Current SSL methods often face a trade-off: optimizing for transformation invariance in representation space (useful for high-level tasks such as classification) tends to reduce equivariance (needed for fine-grained downstream tasks related to details like object rotation, …
1
0
2