Kyle Mahowald
@kmahowald
Followers
2K
Following
2K
Media
23
Statuses
642
UT Austin computational linguist. cognition, psycholinguistics, data, NLP, crosswords.
Austin, TX, USA
Joined March 2009
Imagine testing n-gram models or PCFGs for knowledge of grammar. You couldn’t possibly ask them “is this sentence grammatical?” because they can’t answer questions. You'd need some other method.
2
0
8
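(A minimal sketch of such an alternative method, my illustration rather than anything from the tweet: compare the probability the model assigns to a grammatical sentence against a minimally different ungrammatical one. The toy corpus and the add-one-smoothed bigram model below are assumptions for illustration only.)

```python
# Minimal-pair probing with a toy add-one-smoothed bigram model (corpus and
# sentences made up for illustration): the model "answers" the grammaticality
# question by assigning higher probability to the grammatical member of the pair.
import math
from collections import Counter

corpus = [
    "the dog chases the cat",
    "the cat sees a dog",
    "a dog sees the cat",
]
tokens = [w for s in corpus for w in ("<s> " + s + " </s>").split()]
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
vocab = len(unigrams)

def logprob(sentence: str) -> float:
    """Add-one-smoothed bigram log-probability of a sentence."""
    words = ("<s> " + sentence + " </s>").split()
    return sum(
        math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
        for a, b in zip(words, words[1:])
    )

good, bad = "the dog sees the cat", "the dog see the cat"
print(logprob(good) > logprob(bad))  # True: the grammatical variant is preferred
```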
Very honored and excited to have won an Outstanding Paper Award at #emnlp2025! Special thanks to my co-authors @kmahowald and @ChrisGPotts, and excited to see where mechanistic approaches to LMs take the field of modern computational linguistics!
Outstanding paper (7/7): "Causal Interventions Reveal Shared Structure Across English Filler–Gap Constructions" by Sasha Boguraev, Christopher Potts, and Kyle Mahowald https://t.co/bf1281Tf38 8/n
0
3
25
Delighted Sasha's work using mech interp to study complex syntax constructions won an Outstanding Paper Award at EMNLP! And delighted the ACL community continues to recognize unabashedly linguistic topics like filler-gaps, and the huge potential for LMs to inform such topics!
A key hypothesis in the history of linguistics is that different constructions share underlying structure. We take advantage of recent advances in mechanistic interpretability to test this hypothesis in Language Models. New work with @kmahowald and @ChrisGPotts! 🧵👇
1
21
90
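(To make the method concrete: below is a schematic sketch of an interchange intervention, written under my own assumptions rather than taken from the paper. The idea is to cache a hidden state from a "source" run and patch it into a "base" run; LAYER, POS, and the example sentences are arbitrary placeholders, not the paper's actual intervention sites.)

```python
# Schematic interchange intervention (illustrative assumptions, not the
# paper's code): cache one layer's hidden state from a "source" run and
# patch it into a "base" run at a single position.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

LAYER, POS = 6, 3  # hypothetical intervention site
block = model.transformer.h[LAYER]
cache = {}

def save_hook(module, inputs, output):
    # GPT2Block returns a tuple; hidden states are its first element.
    cache["h"] = output[0].detach().clone()

def patch_hook(module, inputs, output):
    # Swap the cached source activation into the base run at one position.
    patched = output[0].clone()
    patched[:, POS] = cache["h"][:, POS]
    return (patched,) + output[1:]

source = tok("What did the chef prepare for the guests", return_tensors="pt")
base = tok("I know that the chef prepared the meal", return_tensors="pt")

with torch.no_grad():
    handle = block.register_forward_hook(save_hook)
    model(**source)                        # first pass: cache the activation
    handle.remove()
    handle = block.register_forward_hook(patch_hook)
    patched_logits = model(**base).logits  # second pass: run with the patch
    handle.remove()
    clean_logits = model(**base).logits    # unpatched baseline

# One would then compare next-token probabilities with vs. without the patch
# to ask whether this site carries structure shared across constructions
# (e.g., filler-gap dependencies).
```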
Can AI Models Show Us How People Learn? Incoming @UBCLinguistics Assistant Professor I Papadimitriou has something to say about this question https://t.co/Oe5Nt7ZHvL
quantamagazine.org
Certain grammatical rules never appear in any known language. By constructing artificial languages that have these rules, linguists can use neural networks to explore how people learn.
0
3
10
@kmahowald Even Darwin's evolutionary theory was catalyzed by historical linguistics, which introduced the model of an ancestor language that no longer exists giving rise to several daughter languages, decades before Darwin.
0
3
11
New paper, with @kmahowald, on LMs and linguistics that conveys our excitement about what the present moment means for linguistics. Some of the questions and our answers are summarized in the slide above. https://t.co/7DKmClgmeW
1
11
44
The most important thing we do here is trace some history, reminding us that a line can be traced from basic science research to modern neural nets/AI. Not research in engineering, but in linguistics and cognitive science. Much of that fundamental early work was funded by the (currently paused) NSF.
Language Models learn a lot about language, much more than we expected, without much built-in structure. This matters for linguistics and opens up enormous opportunities. So should we just throw out linguistics? No! Quite the opposite: we need theory and structure.
1
7
65
"Mission: Impossible" was featured in @QuantaMagazine! Big thank you to @benbenbrubaker for the wonderful article covering our work on impossible languages. Ben was so thoughtful and thorough in all our conversations, and it really shows in his writing!
Large language models may not be so omnipotent after all. New research shows that LLMs, like humans, prefer to learn some linguistic patterns over others. @benbenbrubaker reports:
4
10
57
I've posted the practice run of my LSA keynote. My core claim is that LLMs can be useful tools for doing close linguistic analysis. I illustrate with a detailed case study, drawing on corpora, targeted evaluations, and causal intervention-based analyses:
2
37
194
Thanks to all the lab members whose work made this possible! https://t.co/aGNNzzcsqJ
We are thrilled to announce that three @UTAustin scholars have been selected as the 2024 recipients of the University Research Excellence Awards:
• Research Excellence Career Award – Sumit Guha, professor, @UT_HistDept, @LiberalArtsUT
• Creative Endeavor Award –
7
3
52
Important early results on whether speech-only philosophers can refer. Awaiting Tal's future work on extending this paradigm to a multimodal @MMandelkern.
The highlight of EMNLP for me, other than the beach, was @MMandelkern's slideless philosophy talk. People couldn't handle it.
3
1
34
UT Linguistics is on a roll at #EMNLP2024! Congrats to Prof. Kyle Mahowald (@kmahowald) and Prof. Jessy Li (@jessyjli), and their respective coauthors, for winning Outstanding Paper Awards!
0
4
30
Grad school is the time to find a friend who will not only take the time to ask about a small typo you made but will then reject your explanation for the typo and post the exchange on Twitter.
1
0
35
Hmm I wasn’t exactly aiming for the period + triple comma, but in honor of @jessyjli let’s call it a new discourse marker.
3
0
6
Much deserved for this paper .,,,and more broadly the cool and now extensive body of work by @jessyjli and co on computational QUD!
Thrilled that we won an 🥂Outstanding Paper Award at #EMNLP2024! Super validating for using computational methods to investigate discourse processing via QUDs. Super proud of my students @YatingWu96 @ritikarmangla, amazing team @AlexGDimakis @gregd_nlp
1
0
16
Thrilled that we won an 🥂Outstanding Paper Award at #EMNLP2024! Super validating for using computational methods to investigate discourse processing via QUDs. Super proud of my students @YatingWu96 @ritikarmangla, amazing team @AlexGDimakis @gregd_nlp
LLMs can mimic human curiosity by generating open-ended inquisitive questions given some context, similar to how humans wonder as they read. But which questions are most important to answer?🤔 We predict the salience of questions, substantially outperforming GPT-4.🌟 🧵1/5
14
9
130
I'm thrilled to announce our paper "Which questions should I answer? Salience Prediction of Inquisitive Questions" has won an outstanding paper award at EMNLP 2024🥳🥳. Thank you so much to my amazing co-authors and advisors!!! @ritikarmangla, @AlexGDimakis, @gregd_nlp, @jessyjli
7
9
103
Two awards for UT Austin papers! Salience prediction of inquisitive questions, by @YatingWu96, @ritikarmangla, @AlexGDimakis, me, and @jessyjli. Learning AANNs and insights about grammatical generalization in pre-training, by @kanishkamisra & @kmahowald. Congrats to all the awardees!
3
17
92