Brown NLP
@Brown_NLP
Followers 3K · Following 104 · Media 32 · Statuses 159
Language Understanding and Representation Lab at Brown University. PI: Ellie Pavlick.
Providence, RI
Joined October 2020
What does your favorite language model know about the real world? 🌎 Can it distinguish between possible and impossible events? We find that LM representations not only encode these distinctions, but that they predict human judgments of event plausibility!
Replies 1 · Reposts 7 · Likes 22
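For readers who want to try something like this themselves, here is a minimal sketch of the kind of probing experiment the finding above suggests. The model, layer, pooling, and toy sentences are illustrative assumptions on my part, not the paper's actual setup:

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

# Toy stand-ins for a real possible/impossible event dataset.
events = [
    ("The chef sliced the tomato.", 1),   # possible
    ("The tomato sliced the chef.", 0),   # impossible
    ("The dog chased the ball.", 1),
    ("The ball memorized the dog.", 0),
]

def embed(sentence, layer=6):
    """Mean-pool one hidden layer as the event representation."""
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs, output_hidden_states=True).hidden_states[layer]
    return hidden.mean(dim=1).squeeze(0)

X = torch.stack([embed(s) for s, _ in events]).numpy()
y = [label for _, label in events]

# If a simple linear probe separates the classes, the possible/impossible
# distinction is (at least linearly) encoded in the representations.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", probe.score(X, y))
```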
In our new paper, we ask whether language models solve compositional tasks using compositional mechanisms. 🧵
Replies 4 · Reposts 27 · Likes 183
Our “academic pre-training” paper was accepted to COLM! I’ll be presenting at the Tuesday (11 AM) poster session!
↳ Quoted tweet: Wondering how long it takes to train a 1B-param LM from scratch on your GPUs? 🧵 See our paper to learn about the current state of academic compute and how to efficiently train models! Use our code to test your own models/GPUs! https://t.co/hvrjwlApN8
https://t.co/1JnEe2CCLr
Replies 0 · Reposts 3 · Likes 19
Brown’s Department of Cognitive & Psychological Sciences is hiring a tenure-track Assistant Professor, working in the area of AI and the Mind (start July 1, 2026). Apply by Nov 8, 2025 👉 https://t.co/clod1iz6xu
#AI #CognitiveScience #AcademicJobs #BrownUniversity
Replies 1 · Reposts 17 · Likes 25
🥳 Our recent work was accepted to the #EMNLP2025 main conference! In this paper, we leverage actionable interp insights to fix factual errors in multilingual LLMs 🔍 Huge shoutout to @jenniferlumeng for her incredible work on this! She's applying for PhD programs this cycle and you should …
↳ Quoted tweet: 🤔Ever wonder why LLMs give inconsistent answers in different languages? In our paper, we identify two failure points in the multilingual factual recall process and propose fixes that guide LLMs to the "right path." This can boost performance by 35% in the weakest language! 📈
Replies 8 · Reposts 7 · Likes 72
Check out our new paper: “How Do Vision-Language Models Process Conflicting Information Across Modalities?”! Vision-language models often struggle with conflicting inputs - we show how their internal representations and key attention heads reveal when and how this happens, and …
Replies 3 · Reposts 7 · Likes 32
🤔Ever wonder why LLMs give inconsistent answers in different languages? In our paper, we identify two failure points in the multilingual factual recall process and propose fixes that guide LLMs to the "right path." This can boost performance by 35% in the weakest language! 📈
Replies 2 · Reposts 15 · Likes 74
David Byrne won't be at @NeurIPSConf, but we will be!
↳ Quoted tweet: Can we find circuits directly from a model’s params? At NeurIPS I’m presenting work on understanding how attn heads in LMs communicate by analyzing their weights. We find a lot of interesting things, like a 3D subspace that controls which index in a list to attend to!
Replies 2 · Reposts 1 · Likes 14
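As a rough illustration of weight-based circuit analysis in this spirit (my sketch, not the paper's method): the "QK circuit" of an attention head, W_Q W_K^T, can be inspected directly from the parameters, and its singular values show how low-dimensional the subspace the head reads from really is. The model, layer, and head below are arbitrary choices:

```python
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
layer, head, d_model, d_head = 5, 1, 768, 64

# GPT-2 packs Q, K, V into one Conv1D weight of shape (d_model, 3*d_model).
W = model.transformer.h[layer].attn.c_attn.weight.detach()
W_Q = W[:, :d_model][:, head * d_head : (head + 1) * d_head]
W_K = W[:, d_model : 2 * d_model][:, head * d_head : (head + 1) * d_head]

# W_Q W_K^T maps (query residual, key residual) pairs to an attention
# logit; its spectrum reveals the effective subspace the head attends
# through -- often far lower-dimensional than d_head.
QK = W_Q @ W_K.T                      # (d_model, d_model), rank <= d_head
S = torch.linalg.svdvals(QK)
print("top 5 singular values:", S[:5])
print("fraction of spectrum in top 3 dims:", (S[:3].sum() / S.sum()).item())
```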
🚨 New paper at @NeurIPSConf w/ @Michael_Lepori! Most work on interpreting vision models focuses on concrete visual features (edges, objects). But how do models represent abstract visual relations between objects? We adapt NLP interpretability techniques for ViTs to find out! 🔍
Replies 2 · Reposts 38 · Likes 260
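One NLP interpretability technique that transfers directly to ViTs is linear probing of internal representations. A hedged sketch of that transfer (illustrative only; the dataset here is random stand-in tensors, and the model and layer are not the paper's choices):

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import ViTModel

model = ViTModel.from_pretrained("google/vit-base-patch16-224")
model.eval()

def cls_embedding(pixel_values):
    """Final-layer [CLS] token as the whole-image representation."""
    with torch.no_grad():
        out = model(pixel_values=pixel_values)
    return out.last_hidden_state[:, 0]          # (batch, hidden)

# Random tensors stand in for rendered images that either satisfy an
# abstract relation (e.g. "two objects, same shape") or violate it.
same_imgs = torch.randn(8, 3, 224, 224)
diff_imgs = torch.randn(8, 3, 224, 224)

X = torch.cat([cls_embedding(same_imgs), cls_embedding(diff_imgs)]).numpy()
y = [1] * 8 + [0] * 8

# With real stimuli, above-chance held-out accuracy would indicate the
# abstract relation is linearly decodable from the ViT embedding.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", probe.score(X, y))
```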
Thanks to Nature News for some nice coverage of our work! https://t.co/SsNJXcn4Fy
↳ Quoted tweet: Wondering how long it takes to train a 1B-param LM from scratch on your GPUs? 🧵 See our paper to learn about the current state of academic compute and how to efficiently train models! Use our code to test your own models/GPUs! https://t.co/hvrjwlApN8
https://t.co/1JnEe2CCLr
Replies 0 · Reposts 3 · Likes 22
🤔How do multilingual LLMs encode structural similarities across languages? 🌟We find that LLMs use identical circuits when languages share the same morphosyntactic processes. However, they recruit specialized components to handle tasks that involve language-specific linguistic features⤵️
Replies 2 · Reposts 36 · Likes 157
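A common way to test whether two languages share a circuit is activation patching: cache a layer's activations from a run in language A, splice them into a run in language B, and check whether the prediction survives. A minimal sketch under assumed choices (model, layer, and prompts are mine, not the paper's):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloom-560m"          # stand-in multilingual LM
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)
model.eval()

prompt_en = "The keys to the cabinet"   # subject-verb agreement in English
prompt_es = "Las llaves del armario"    # the same construction in Spanish

block = model.transformer.h[8]          # arbitrary mid-depth layer
cache = {}

def save_last_token(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    cache["act"] = hidden[:, -1, :].detach()

def patch_last_token(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    patched = hidden.clone()
    patched[:, -1, :] = cache["act"]
    return (patched,) + output[1:] if isinstance(output, tuple) else patched

def next_token(prompt):
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    return tok.decode(logits[0, -1].argmax().item())

# 1) Cache the Spanish run's activation at this layer.
handle = block.register_forward_hook(save_last_token)
next_token(prompt_es)
handle.remove()

# 2) Splice it into the English run. If the prediction stays grammatical,
#    this layer plausibly implements a language-general circuit.
handle = block.register_forward_hook(patch_last_token)
print("patched EN:", next_token(prompt_en))
handle.remove()
print("clean   EN:", next_token(prompt_en))
```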
Wondering how long it takes to train a 1B-param LM from scratch on your GPUs? 🧵 See our paper to learn about the current state of academic compute and how to efficiently train models! Use our code to test your own models/GPUs! https://t.co/hvrjwlApN8
https://t.co/1JnEe2CCLr
github.com · $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources - apoorvkh/academic-pretraining
Replies 10 · Reposts 97 · Likes 657
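The linked repo has its own careful benchmarking harness; for intuition, though, a back-of-envelope throughput estimate is easy to hand-roll. This sketch (model size, batch shape, and token budget are all illustrative assumptions) times a few training steps and extrapolates wall-clock days:

```python
import time
import torch
from transformers import GPT2Config, GPT2LMHeadModel

device = "cuda" if torch.cuda.is_available() else "cpu"
config = GPT2Config(n_layer=12, n_embd=768, n_head=12)  # stand-in size
model = GPT2LMHeadModel(config).to(device)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

batch, seq = 8, 512
x = torch.randint(0, config.vocab_size, (batch, seq), device=device)

def step():
    loss = model(x, labels=x).loss
    loss.backward()
    opt.step()
    opt.zero_grad()

for _ in range(3):                      # warmup (kernels, allocator)
    step()
if device == "cuda":
    torch.cuda.synchronize()

t0 = time.time()
n_steps = 10
for _ in range(n_steps):
    step()
if device == "cuda":
    torch.cuda.synchronize()

tokens_per_sec = n_steps * batch * seq / (time.time() - t0)
budget = 20e9                           # hypothetical ~20B-token run
print(f"{tokens_per_sec:,.0f} tokens/s -> {budget / tokens_per_sec / 86400:.1f} days")
```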
LUNAR Lab is looking for a postdoc to work on mechanistic interpretability + AI safety/trustworthiness! The position is for two years, with the possibility of extension. If interested, submit a CV here. Applicants will be considered on a rolling basis.
Replies 0 · Reposts 11 · Likes 22
How robust are in-context algorithms? In new work with @michael_lepori, @jack_merullo, and @brown_nlp, we explore why in-context learning disappears over training and fails on rare and unseen tokens. We also introduce a training intervention that fixes these failures.
Replies 2 · Reposts 13 · Likes 88
Job Opportunity! At @BrownUniversity @Brown_DSI I direct the (new) Center for Tech Responsibility (CNTR). I'm looking to hire a Program Manager who would work with me to help operationalize the vision for the Center. If you're interested, apply!
Replies 0 · Reposts 27 · Likes 47
Excited to share that our work on in-context teaching will appear at #ACL2024! 🇹🇭
↳ Quoted tweet: Good teachers *adapt* to student beliefs & misconceptions: Can LLM teachers? In new work w/ @jacobandreas, we introduce 1) the AdapT 👩🏫 (Adaptive Teaching) evaluation framework & 2) AToM ⚛️ (Adaptive Teaching tOwards Misconceptions), a new probabilistic teaching method. (1/n)
Replies 0 · Reposts 9 · Likes 91
Can Large Language Models Understand ‘Meaning’? https://t.co/F9r0weOUOe via @YouTube
Replies 0 · Reposts 3 · Likes 26
Excited to share our work “mOthello: When Do Cross-Lingual Representation Alignment and Cross-Lingual Transfer Emerge in Multilingual Models?”, accepted at #NAACL2024 with @EthaHua @Brown_NLP Paper: https://t.co/a087WNBurK Code: https://t.co/a6wmgbg5ov Webpage: …
github.com · Code for NAACL 2024 Findings paper "mOthello: When Do Cross-Lingual Representation Alignment and Cross-Lingual Transfer Emerge in Multilingual Models?" - ethahtz/multilingual_othello
Replies 1 · Reposts 4 · Likes 33
Calling all academic AI researchers! 🚨 We are conducting a survey on compute resources. We want to help the community better understand our capabilities+needs. We hope that this will help us all advocate for the resources we need! Please contribute at:
docs.google.com · Target Audience: Academic researchers directly involved in AI fields (ML, NLP, CV, etc). Estimated time to complete this survey is 5–10 minutes. Contact: Apoorv Khandelwal, Nihal Nayak, Tian Yun, Jack...
Replies 0 · Reposts 32 · Likes 54