
Brown NLP
@Brown_NLP
3K Followers · 99 Following · 32 Media · 153 Statuses
Language Understanding and Representation Lab at Brown University. PI: Ellie Pavlick.
Providence, RI
Joined October 2020
RT @ruochenz_: 🤔Ever wonder why LLMs give inconsistent answers in different languages? In our paper, we identify two failure points in the…
David Byrne won't be at @NeurIPSConf, but we will be!
Can we find circuits directly from a model’s params? At NeurIPS I’m presenting work on understanding how attention heads in LMs communicate by analyzing their weights. We find a lot of interesting things, like a 3D subspace that controls which index in a list to attend to!
RT @ARTartaglini: 🚨 New paper at @NeurIPSConf w/ @Michael_Lepori! Most work on interpreting vision models focuses on concrete visual featu…
RT @ruochenz_: 🤔How do multilingual LLMs encode structural similarities across languages? 🌟We find that LLMs use identical circuits when la…
RT @apoorvkh: Wondering how long it takes to train a 1B-param LM from scratch on your GPUs? 🧵 See our paper to learn about the current sta…
RT @neuroexplicit: Now hiring: Twelve (!) PhD students to start in fall 2025, for research on combining neural and symbolic/interpretable m…
RT @surajk610: How robust are in-context algorithms? In new work with @michael_lepori, @jack_merullo, and @brown_nlp, we explore why in-con…
RT @geomblog: Job Opportunity! At @BrownUniversity @Brown_DSI I direct the (new) Center for Tech Responsibility (CNTR). I’m looking to hir…
RT @alexisjross: Excited to share that our work on in-context teaching will appear at #ACL2024! 🇹🇭
RT @tianyunnn: Excited to share our work mOthello: When Do Cross-Lingual Representation Alignment and Cross-Lingual Transfer Emerge in Mult…
RT @apoorvkh: Calling all academic AI researchers! 🚨 We are conducting a survey on compute resources. We want to help the community better…
RT @EnyanZhang: It’s often hard to predict/control what neural networks learn during training – can we encourage them to adopt the solutions…
RT @jack_merullo_: Our #ICLR2024 paper was accepted as a spotlight: We look at whether language models reuse attention heads for functional…
RT @Michael_Lepori: Compositional generalization is a major challenge for neural networks. In a #NeurIPS2023 spotlight paper with @tserre a…
RT @tianyunnn: We study whether decision making via sequence modeling predicts action using internal state representations or surface stati…
RT @apoorvkh: A few recent approaches solve CV tasks by generating and executing programs with neural and symbolic modules. In our new pap…
RT @Michael_Lepori: Domain experts often have intuitions about the algorithms that transformers may use to solve tasks, but do models actua…