Maxime Peyrard Profile
Maxime Peyrard

@peyrardMax

Followers: 386
Following: 280
Media: 6
Statuses: 79

Junior Professor @CNRS (previously @EPFL, @TUDarmstadt) -- AI interpretability, causality, and interaction flows between LLMs, humans, and tools

Joined February 2016
@peyrardMax
Maxime Peyrard
7 months
Our paper "Everything, Everywhere, All at Once: Is Mechanistic Interpretability Identifiable?" will be presented at #ICLR2025! It's also the first paper of my first PhD student — congrats @maximemeloux! 🎉 blog post: https://t.co/puTwN8dgYS A short thread 🧵
5
11
116
@krisgligoric
Kristina Gligorić
4 days
I'm recruiting multiple PhD students for Fall 2026 in Computer Science at @JHUCompSci 🍂 Apply to work on AI for social sciences/human behavior, social NLP, and LLMs for real-world applied domains you're passionate about! Learn more https://t.co/KbTJevMb8J & help spread the word!
14
154
651
@ReliableAI
RELAI
13 days
🚀 RELAI is live — a platform for building reliable AI agents
🔁 We complete the learning loop for agents: simulate → evaluate → optimize
- Simulate with LLM personas, mocked MCP servers/tools and grounded synthetic data
- Evaluate with code + LLM evaluators; turn human
9
28
54
@gaganbhatiaml
Gagan Bhatia
3 months
Breaking news! 🚨Our paper, "Date Fragments: A Hidden Bottleneck of Tokenization for Temporal Reasoning," has been accepted to the #EMNLP2025 main conference! Problem: LLMs struggle with time because tokenizers break dates into junk pieces (e.g., “20250312” → “202”, “503”, “12”)
2
1
17
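As a quick illustration of the fragmentation issue described in the tweet above, here is a minimal sketch using the tiktoken package with the GPT-2 vocabulary (my own example, not the paper's tooling; the exact fragments differ across tokenizers):

```python
# Minimal sketch: inspect how a BPE tokenizer splits a YYYYMMDD date string.
# Assumes the tiktoken package; the paper's own tokenizers/tooling may differ.
import tiktoken

enc = tiktoken.get_encoding("gpt2")        # GPT-2 byte-pair-encoding vocabulary
ids = enc.encode("20250312")               # the date from the tweet's example
pieces = [enc.decode([i]) for i in ids]    # decode each token id separately
print(pieces)  # fragments such as ['202', '503', '12'] rather than year/month/day
```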
@gaganbhatiaml
Gagan Bhatia
3 months
It's fascinating to see how LLMs learn to stitch the fragments back together! Pre-print: https://t.co/FIeBEcnrGR Thanks to my co-authors @peyrardMax & @andyweizhao ! See you at EMNLP!
0
1
4
@SaiboGeng
Saibo-Creator
4 months
🚀 Excited to share our latest work at ICML 2025 — zip2zip: Inference-Time Adaptive Vocabularies for Language Models via Token Compression!
Sessions:
📅 Fri 18 Jul - Tokenization Workshop
📅 Sat 19 Jul - Workshop on Efficient Systems for Foundation Models (Oral 5/145)
1
6
24
@jkminder
Julian Minder
4 months
Causal Abstraction, the theory behind DAS, tests if a network realizes a given algorithm. We show (w/ @DenisSutte9310, T. Hofmann, @tpimentelms) that the theory collapses without the linear representation hypothesis—a problem we call the non-linear representation dilemma.
1
4
26
@DamienTeney
Damien Teney
4 months
Great case of underspecification: many solutions exist to the ERM learning objective. Key question: what's formally a "good model" (low MDL?) & how to make this the objective. Short of that, we could learn a variety of solutions to examine/select post-hoc:
[Link preview] arxiv.org: "Machine learning (ML) models are typically optimized for their accuracy on a given dataset. However, this predictive criterion rarely captures all desirable properties of a model, in particular..."
@keyonV
Keyon Vafa
4 months
Can an AI model predict perfectly and still have a terrible world model? What would that even mean? Our new ICML paper formalizes these questions One result tells the story: A transformer trained on 10M solar systems nails planetary orbits. But it botches gravitational laws 🧵
2
3
34
@MartinJosifoski
Martin Josifoski
4 months
Scaling AI research agents is key to tackling some of the toughest challenges in the field. But what's required to scale effectively? It turns out that simply throwing more compute at the problem isn't enough. We break down an agent into four fundamental components that shape
5
29
155
@DamienTeney
Damien Teney
4 months
Coming up at ICML: 🤯Distribution shifts are still a huge challenge in ML. There's already a ton of algorithms to address specific conditions. So what if the challenge was just selecting the right algorithm for the right conditions?🤔🧵
3
38
364
@KordingLab
Kording Lab 🦖
5 months
Should we shut down the field of mechanistic interpretability?
7
7
31
@tizianopiccardi
Tiziano Piccardi
5 months
I'm so excited to join the CS department at Johns Hopkins University as an Assistant Professor! I'm looking for students interested in social computing, HCI, and AI—especially around designing better online systems in the age of LLMs. Come work with me! https://t.co/703Fxe3MdC
25
34
339
@krisgligoric
Kristina Gligorić
5 months
I'm excited to announce that I’ll be joining the Computer Science department at @JohnsHopkins as an Assistant Professor this Fall! I’ll be working on large language models, computational social science, and AI & society—and will be recruiting PhD students. Apply to work with me!
125
178
4K
@andyweizhao
Wei Zhao
6 months
Excited to announce the 1st Workshop on Large Language Models for Cross-Temporal Research at COLM 2025 on Oct 10 in Montreal 🇨🇦 LLMs are hindered in their understanding of time due to temporal biases, conflicting knowledge, and tokenization that fragments dates, leading to
1
6
16
@peyrardMax
Maxime Peyrard
7 months
What to do?
👉 Stricter validity criteria?
👉 Maybe interpretability is inherently underdetermined, and we can only get control and predictability but not "understanding"?
This is a fascinating topic, and we keep investigating. If you're interested, come and chat at ICLR!
0
0
2
@peyrardMax
Maxime Peyrard
7 months
Of course, LLMs are not small MLPs solving Boolean logic. It’s possible that identifiability becomes less of an issue in large models. But we argue the question deserves attention — how can we make interpretability rigorous if identifiability is so fragile?
1
0
1
@peyrardMax
Maxime Peyrard
7 months
We find a lot of identifiability issues:
- Multiple explanatory algorithms exist
- Even for one algorithm, there are many possible localizations in the network
And the findings are robust across scenarios.
1
0
1
@peyrardMax
Maxime Peyrard
7 months
In our work, we stress-test the identifiability of MI research programs with small MLPs and simple Boolean logic tasks. Why? Because this allows us to enumerate all possible explanations and see how many pass various MI testing criteria.
1
0
0
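To make the setup concrete, here is a minimal sketch of the kind of experiment described in the tweet above (my own toy illustration, not the paper's code): train a tiny MLP on a Boolean task, then count how many single-hidden-unit localizations of an intermediate variable survive interchange interventions. Depending on the trained weights, zero, one, or several units may pass, and "several" is exactly the identifiability worry.

```python
# Toy sketch (illustrative only): train a small MLP on (a AND b) OR c, then test
# each hidden unit as a candidate localization of v = a AND b via interchange
# interventions. Multiple passing localizations would be an identifiability issue.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# All 8 Boolean inputs and labels for the task (a AND b) OR c.
X = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)
y = np.logical_or(np.logical_and(X[:, 0], X[:, 1]), X[:, 2]).astype(float)

# One-hidden-layer MLP trained with full-batch gradient descent on MSE.
H = 8
W1 = rng.normal(scale=0.5, size=(3, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)

def forward(x, patch=None):
    # patch = (unit_index, replacement_activation) implements the interchange intervention
    h = np.tanh(x @ W1 + b1)
    if patch is not None:
        i, val = patch
        h = h.copy()
        h[:, i] = val
    return (h @ W2 + b2).ravel(), h

lr, steps = 0.5, 20000
for _ in range(steps):
    out, h = forward(X)
    grad_out = 2 * (out - y)[:, None] / len(X)
    gW2, gb2 = h.T @ grad_out, grad_out.sum(0)
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)   # backprop through tanh
    gW1, gb1 = X.T @ grad_h, grad_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print("train accuracy:", np.mean((forward(X)[0] > 0.5) == (y > 0.5)))  # expect 1.0

def passes(unit):
    """Does localizing v = a AND b in this single hidden unit survive all interventions?"""
    for base, src in itertools.product(range(len(X)), repeat=2):
        _, h_src = forward(X[src:src + 1])
        patched_out, _ = forward(X[base:base + 1], patch=(unit, h_src[:, unit]))
        expected = float(np.logical_or(X[src, 0] * X[src, 1], X[base, 2]))  # v from source, c from base
        if (patched_out[0] > 0.5) != (expected > 0.5):
            return False
    return True

print("units passing the test for v = a AND b:", [u for u in range(H) if passes(u)])
```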
@peyrardMax
Maxime Peyrard
7 months
This brings us to identifiability. In statistics, a property is identifiable if a unique value is compatible with the data. Identifiability matters because it is a prerequisite for doing statistical and causal inference. Interpretability is also an exercise in causal inference!
1
0
1
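For reference, the standard statistical notion the tweet alludes to can be written as follows (my phrasing of the textbook definition, not taken from the thread):

```latex
% A parameter \theta in a family \{P_\theta : \theta \in \Theta\} is identifiable
% if distinct parameter values cannot induce the same distribution over observations:
\forall\, \theta_1, \theta_2 \in \Theta:\qquad
P_{\theta_1} = P_{\theta_2} \;\Longrightarrow\; \theta_1 = \theta_2 .
```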
@peyrardMax
Maxime Peyrard
7 months
Mechanistic Interpretability aims to produce statements like: "Model M solves task T by doing X." To do so, many causal manipulations are performed to validate an explanation. But what if (many) other, incompatible explanations also pass the causal tests?
1
0
3