Michael Lepori (@Michael_Lepori)
Followers: 440 · Following: 1K · Media: 24 · Statuses: 103

PhD student at Brown interested in deep learning + cog sci, but more interested in playing guitar. @NSF GRFP Fellow, @GoogleDeepMind Intern. He/Him.

Providence, RI
Joined January 2021
@Michael_Lepori
Michael Lepori
28 days
How do VLMs balance visual information presented in-context with linguistic priors encoded in-weights? In this project, @MichalGolov and @WilliamRudmanjr find out! My favorite result: you can find a vector that shifts attention to image tokens and changes the VLM's response!
@WilliamRudmanjr
William Rudman
1 month
When vision-language models answer questions, are they truly analyzing the image or relying on memorized facts? We introduce Pixels vs. Priors (PvP), a method to control whether VLMs respond based on input pixels or world knowledge priors. [1/5]
[image]
0 replies · 2 reposts · 9 likes
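For readers curious what such an intervention looks like mechanically, here is a minimal, self-contained sketch of steering attention toward image tokens. It assumes a toy single-head attention layer; the mean-difference steering direction and the 2.0 scale are illustrative assumptions, not the Pixels vs. Priors method.

```python
# Toy sketch of an attention-steering vector (illustrative only; not the
# Pixels vs. Priors method). We nudge the query toward image-token keys
# and watch attention mass shift onto the image positions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d_model, n_img, n_txt = 64, 4, 8

hidden = torch.randn(n_img + n_txt, d_model)    # first n_img rows = "image tokens"
W_q = torch.randn(d_model, d_model) / d_model ** 0.5
W_k = torch.randn(d_model, d_model) / d_model ** 0.5

q = hidden[-1] @ W_q                            # query from the final text token
k = hidden @ W_k                                # keys for every position

def image_attention(query: torch.Tensor) -> float:
    probs = F.softmax(k @ query / d_model ** 0.5, dim=-1)
    return probs[:n_img].sum().item()           # attention mass on image tokens

# Hypothetical steering direction: separates image keys from text keys.
steer = k[:n_img].mean(0) - k[n_img:].mean(0)

print(f"attention to image tokens, before: {image_attention(q):.3f}")
print(f"attention to image tokens, after:  {image_attention(q + 2.0 * steer):.3f}")
```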
@Michael_Lepori
Michael Lepori
1 month
RT @michaelwhanna: @mntssys and I are excited to announce circuit-tracer, a library that makes circuit-finding simple! Just type in a sent…
0 replies · 46 reposts · 0 likes
@Michael_Lepori
Michael Lepori
1 month
RT @GretaTuckute: What are the organizing dimensions of language processing? We show that voxel responses are organized along 2 main axes:…
0 replies · 39 reposts · 0 likes
@Michael_Lepori
Michael Lepori
1 month
I had a great time helping out on this project with @_jennhu and @meanwhileina! If you're interested in the intersection of interpretability and cogsci, check it out!
@_jennhu
Jennifer Hu
1 month
Excited to share a new preprint w/ @Michael_Lepori & @meanwhileina! A dominant approach in AI/cogsci uses *outputs* from AI models (e.g., logprobs) to predict human behavior. But how does model *processing* (across a forward pass) relate to human real-time processing? 👇 (1/12)
[image]
0 replies · 1 repost · 24 likes
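As background for the thread's contrast, here is a brief sketch of the output-based approach it mentions: per-token surprisal computed from a causal LM's logprobs, the standard quantity regressed against human reading behavior. The model choice (gpt2) and sentence are illustrative; this is not the preprint's pipeline.

```python
# Sketch of the "model outputs" baseline the thread contrasts with:
# per-token surprisal (negative log-probability) from a causal LM.
# gpt2 is an illustrative model choice, not the preprint's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("The cat sat on the mat.", return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits

# Surprisal of token t given tokens < t, in nats.
logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
targets = ids[0, 1:]
surprisal = -logprobs[torch.arange(targets.numel()), targets]

for token, s in zip(tok.convert_ids_to_tokens(targets.tolist()), surprisal):
    print(f"{token:>10s}  {s.item():.2f}")
```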
@Michael_Lepori
Michael Lepori
2 months
RT @lambdaviking: Excited to announce I'll be starting as an assistant professor at @TTIC_Connect for fall 2026! In the meantime, I'll be…
0 replies · 24 reposts · 0 likes
@Michael_Lepori
Michael Lepori
2 months
RT @FPiedrahitaV: Excited to present our paper at #NAACL this Friday, May 2, at 10am in Ballroom A! If you're inter…
0 replies · 3 reposts · 0 likes
@Michael_Lepori
Michael Lepori
2 months
RT @WorldModelsICML: Announcing the ICML 2025 workshop on Assessing World Models: Methods and Metrics for Evaluating Understanding 🌍. Submi…
0 replies · 4 reposts · 0 likes
@Michael_Lepori
Michael Lepori
2 months
Also, I'll be in ABQ all week - feel free to reach out if you want to chat!
0 replies · 0 reposts · 1 like
@Michael_Lepori
Michael Lepori
2 months
I'm very excited that this work was accepted for an oral presentation @naacl! Come by at 10:45 on Thursday to hear how we can use mechanistic interpretability to better understand how LLMs incorporate context when answering questions.
@Michael_Lepori
Michael Lepori
9 months
The ability to properly contextualize is a core competency of LLMs, yet even the best models sometimes struggle. In a new preprint, we use #MechanisticInterpretability techniques to propose an explanation for contextualization errors: the LLM Race Conditions Hypothesis. [1/9]
[image]
1 reply · 3 reposts · 27 likes
@Michael_Lepori
Michael Lepori
2 months
RT @surajk610: Excited to be at #ICLR2025 in a few days to present this work with @Michael_Lepori! Interested in chatting about training dy…
0 replies · 3 reposts · 0 likes
@Michael_Lepori
Michael Lepori
2 months
I will be at #ICLR2025 in a few days to present this work with @surajk610! Feel free to DM me if you want to chat about mechinterp, cognitive science, or anything else!
@surajk610
Suraj Anand
1 year
How robust are in-context algorithms? In new work with @michael_lepori, @jack_merullo, and @brown_nlp, we explore why in-context learning disappears over training and fails on rare and unseen tokens. We also introduce a training intervention that fixes these failures.
[image]
1 reply · 3 reposts · 44 likes
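To make "in-context recall on tokens" concrete, here is a generic induction-style probe in the spirit of the phenomenon the thread studies. It is an illustration, not the paper's experimental setup: repeat a token pair in context and check how much probability the model puts on the copy. The word pairs are arbitrary examples.

```python
# Generic induction-style probe of in-context recall (an illustration of
# the phenomenon, not the paper's setup): after seeing "A B A B A",
# how much probability does the model put on "B"?
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def copy_prob(a: str, b: str) -> float:
    ids = tok(f"{a}{b}{a}{b}{a}", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    probs = torch.softmax(logits[0, -1], dim=-1)
    b_first = tok(b, add_special_tokens=False).input_ids[0]  # first token of B
    return probs[b_first].item()

print("frequent tokens:", copy_prob(" dog", " cat"))
print("rarer tokens:   ", copy_prob(" zygote", " quokka"))
```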
@Michael_Lepori
Michael Lepori
3 months
RT @keyonV: AI models appear to mimic the real world. But how can we tell if they truly understand it? Excited to announce the ICML 2025…
0 replies · 11 reposts · 0 likes
@Michael_Lepori
Michael Lepori
4 months
RT @Napoolar: Train your vision SAE on Monday, then again on Tuesday, and you'll find only about 30% of the learned concepts match. ⚓ We p…
0 replies · 79 reposts · 0 likes
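To make the "~30% of concepts match" claim concrete, here is one generic way to quantify overlap between two SAE dictionaries: match decoder directions across runs by cosine similarity and count matches above a cutoff. The matching scheme and the 0.7 threshold are assumptions, not necessarily the paper's metric, and random decoders stand in for two trained runs.

```python
# One generic way to quantify concept overlap between two SAE runs
# (illustrative; the matching scheme and threshold are assumptions, not
# the paper's metric). Random decoders stand in for two trained runs.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
d_model, n_feats = 128, 512

def unit_decoder() -> np.ndarray:
    W = rng.standard_normal((d_model, n_feats))
    return W / np.linalg.norm(W, axis=0, keepdims=True)  # unit-norm features

W_monday, W_tuesday = unit_decoder(), unit_decoder()     # stand-in decoders

sim = W_monday.T @ W_tuesday                  # feature-by-feature cosine sims
rows, cols = linear_sum_assignment(-sim)      # optimal 1:1 matching (max sim)
matched = sim[rows, cols]

threshold = 0.7                               # arbitrary "same concept" cutoff
print(f"fraction matched above {threshold}: {(matched > threshold).mean():.2f}")
```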
@Michael_Lepori
Michael Lepori
4 months
RT @RMBattleday: 📢 A few days left to submit an abstract for our conference on the Mathematics of Neuroscience and AI in sunny Split (May 27…
0 replies · 2 reposts · 0 likes
@Michael_Lepori
Michael Lepori
4 months
RT @RMBattleday: 📢 Abstract submissions extended to 16th March AOE for our Annual Conference on the Mathematics of Neuroscience and AI (May…
0 replies · 9 reposts · 0 likes
@Michael_Lepori
Michael Lepori
7 months
RT @jack_merullo_: Can we find circuits directly from a model’s params? At NeurIPS I’m presenting work on understanding how attn heads in L…
0 replies · 10 reposts · 0 likes
@Michael_Lepori
Michael Lepori
7 months
RT @SimonsInstitute: Watch this special debate live tomorrow at 10:30 a.m. PT — part of our workshop on Unknown Futures of Generalization…
0 replies · 10 reposts · 0 likes
@Michael_Lepori
Michael Lepori
7 months
I'll be at @NeurIPSConf from Monday until Friday next week to present this work. If you'd like to chat about mechinterp, cognitive science, or anything else, feel free to DM me!
@ARTartaglini
Alexa R. Tartaglini
7 months
🚨 New paper at @NeurIPSConf w/ @Michael_Lepori! Most work on interpreting vision models focuses on concrete visual features (edges, objects). But how do models represent abstract visual relations between objects? We adapt NLP interpretability techniques for ViTs to find out! 🔍
[image]
0 replies · 0 reposts · 21 likes
@Michael_Lepori
Michael Lepori
7 months
RT @RMBattleday: 📢📢 Just two days left until our Summit on Open Problems for AI in Boston: Algorithms (4th Dec), E…
0 replies · 3 reposts · 0 likes
@Michael_Lepori
Michael Lepori
7 months
Even ducklings 🐣 can represent abstract visual relations. Can your favorite ViT? In our new @NeurIPSConf paper, we use mechanistic interpretability to find out!
@ARTartaglini
Alexa R. Tartaglini
7 months
🚨 New paper at @NeurIPSConf w/ @Michael_Lepori! Most work on interpreting vision models focuses on concrete visual features (edges, objects). But how do models represent abstract visual relations between objects? We adapt NLP interpretability techniques for ViTs to find out! 🔍
[image]
0 replies · 4 reposts · 23 likes
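One NLP-style interpretability tool that transfers naturally to questions like this is a linear probe: train a linear classifier on frozen features and test whether a relation like same/different is decodable. The sketch below is generic, not the paper's method, and synthetic embeddings stand in for real ViT features; it probes elementwise |a - b|, since similarity is not a linear function of raw concatenated embeddings.

```python
# Generic linear-probe sketch (not the paper's method): is "same vs.
# different" decodable from frozen pair features? Synthetic embeddings
# stand in for real ViT features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pairs, d = 2000, 64

a = rng.standard_normal((n_pairs, d))                      # first object
same = rng.integers(0, 2, n_pairs)                         # 1 = "same" pair
b = np.where(same[:, None] == 1,
             a + 0.1 * rng.standard_normal((n_pairs, d)),  # same object + noise
             rng.standard_normal((n_pairs, d)))            # unrelated object

X = np.abs(a - b)                                          # pair features

X_tr, X_te, y_tr, y_te = train_test_split(X, same, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")
```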