Kyle Mahowald Profile
Kyle Mahowald

@kmahowald

Followers
2K
Following
2K
Media
23
Statuses
642

UT Austin computational linguist. cognition, psycholinguistics, data, NLP, crosswords.

Austin, TX, USA
Joined March 2009
@kmahowald
Kyle Mahowald
1 year
Imagine testing n-gram models or PCFGs for knowledge of grammar. You couldn’t possibly ask them “is this sentence grammatical?” because they can’t answer questions. You'd need some other method.
2
0
8
@kmahowald
Kyle Mahowald
7 days
Oh cool! Excited this LM + construction paper was SAC-Highlighted! Check it out to see how LM-derived measures of statistical affinity separate out constructions with similar words like "I was so happy I saw you" vs "It was so big it fell over".
@coryshain
Cory Shain
7 days
@jsrozner's paper (w/@LAWeissweiler + @kmahowald) was an SAC Highlight at #EMNLP25!
0
3
11
@SashaBoguraev
Sasha Boguraev
10 days
Very honored and excited to have won an Outstanding Paper Award at #emnlp2025! Special thanks to my co-authors @kmahowald and @ChrisGPotts, and excited to see where mechanistic approaches to LMs take the field of modern computational linguistics!
@emnlpmeeting
EMNLP 2025
10 days
Outstanding paper (7/7): "Causal Interventions Reveal Shared Structure Across English Filler–Gap Constructions" by Sasha Boguraev, Christopher Potts, and Kyle Mahowald https://t.co/bf1281Tf38 8/n
0
3
25
@kmahowald
Kyle Mahowald
10 days
Delighted Sasha's work using mech interp to study complex syntax constructions won an Outstanding Paper Award at EMNLP! And delighted the ACL community continues to recognize unabashedly linguistic topics like filler-gaps, and the huge potential for LMs to inform such topics!
@SashaBoguraev
Sasha Boguraev
6 months
A key hypothesis in the history of linguistics is that different constructions share underlying structure. We take advantage of recent advances in mechanistic interpretability to test this hypothesis in Language Models. New work with @kmahowald and @ChrisGPotts! 🧵👇
1
21
90
@begusgasper
Gašper Beguš
10 months
@kmahowald Even Darwin's evolutionary theory was catalyzed by historical linguistics, which introduced the model of an ancestor language that no longer exists giving rise to several daughter languages, decades before Darwin.
0
3
11
@rljfutrell
Richard Futrell
10 months
New paper, with @kmahowald , on LMs and linguistics that conveys our excitement about what the present moment means for linguistics. Some of the questions and our answers are summarized in the slide above. https://t.co/7DKmClgmeW
1
11
44
@kmahowald
Kyle Mahowald
10 months
Most important thing we do here is trace some history, reminding us that a line can be traced from basic science research to modern neural nets/AI. Not in engineering but in linguistics and cognitive science. Much of that fundamental early work was funded by the (currently paused) NSF.
@rljfutrell
Richard Futrell
10 months
Language Models learn a lot about language, much more than we expected, without much built-in structure. This matters for linguistics and opens up enormous opportunities. So should we just throw out linguistics? No! Quite the opposite: we need theory and structure.
1
7
65
@JulieKallini
Julie Kallini ✨
10 months
"Mission: Impossible" was featured in @QuantaMagazine! Big thank you to @benbenbrubaker for the wonderful article covering our work on impossible languages. Ben was so thoughtful and thorough in all our conversations, and it really shows in his writing!
@QuantaMagazine
Quanta Magazine
10 months
Large language models may not be so omnipotent after all. New research shows that LLMs, like humans, prefer to learn some linguistic patterns over others. @benbenbrubaker reports:
4
10
57
@QuantaMagazine
Quanta Magazine
10 months
Large language models may not be so omnipotent after all. New research shows that LLMs, like humans, prefer to learn some linguistic patterns over others. @benbenbrubaker reports:
quantamagazine.org
Certain grammatical rules never appear in any known language. By constructing artificial languages that have these rules, linguists can use neural networks to explore how people learn.
2
37
95
@ChrisGPotts
Christopher Potts
10 months
I've posted the practice run of my LSA keynote. My core claim is that LLMs can be useful tools for doing close linguistic analysis. I illustrate with a detailed case study, drawing on corpora, targeted evaluations, and causal intervention-based analyses:
2
37
194
@vagheesh
Vagheesh Narasimhan
11 months
Thanks to all the lab members whose work made this possible! https://t.co/aGNNzzcsqJ
@UTexasResearch
UT Austin Research
11 months
We are thrilled to announce that three @UTAustin scholars have been selected as the 2024 recipients of the University Research Excellence Awards: • Research Excellence Career Award – Sumit Guha, professor, @UT_HistDept, @LiberalArtsUT • Creative Endeavor Award –
7
3
52
@kmahowald
Kyle Mahowald
1 year
Important early results on whether speech-only philosophers can refer. Awaiting Tal’s future work on extending this paradigm to a multimodal @MMandelkern.
@tallinzen
Tal Linzen
1 year
The highlight of EMNLP for me, other than the beach, was @MMandelkern's slideless philosophy talk. People couldn't handle it.
3
1
34
@UT_Linguistics
UT Linguistics Dept
1 year
UT Linguistics is on a roll at #EMNLP2024! Congrats to Prof. Kyle Mahowald (@kmahowald) and Prof. Jessy Li (@jessyjli), and their respective coauthors, for winning Outstanding Paper Awards!
@emnlpmeeting
EMNLP 2025
1 year
Announcing the 20 **Outstanding Papers** for #EMNLP2024
0
4
30
@kmahowald
Kyle Mahowald
1 year
Grad school is the time to find a friend who will not only take the time to ask after a small typo you made but then rejects your explanation for the typo and posts the exchange on Twitter.
@danintheory
Dan Roberts
1 year
1
0
35
@kmahowald
Kyle Mahowald
1 year
Hmm I wasn’t exactly aiming for the period + triple comma, but in honor of @jessyjli let’s call it a new discourse marker.
3
0
6
@kmahowald
Kyle Mahowald
1 year
Much deserved for this paper .,,,and more broadly the cool and now extensive body of work by @jessyjli and co on computational QUD!
@jessyjli
Jessy Li
1 year
Thrilled that we won an 🥂Outstanding Paper Award at #EMNLP2024! Super validating for using computational methods to investigate discourse processing via QUDs. Super proud of my students @YatingWu96 @ritikarmangla, amazing team @AlexGDimakis @gregd_nlp
1
0
16
@jessyjli
Jessy Li
1 year
Thrilled that we won an 🥂Outstanding Paper Award at #EMNLP2024! Super validating for using computational methods to investigate discourse processing via QUDs. Super proud of my students @YatingWu96 @ritikarmangla, amazing team @AlexGDimakis @gregd_nlp
@YatingWu96
Yating Wu
2 years
LLMs can mimic human curiosity by generating open-ended inquisitive questions given some context, similar to how humans wonder when they read. But which ones are more important to answer?🤔 We predict the salience of questions, substantially outperforming GPT-4.🌟 🧵1/5
14
9
130
@YatingWu96
Yating Wu
1 year
I'm thrilled to announce our paper "Which questions should I answer? Salience Prediction of Inquisitive Questions" has won an Outstanding Paper Award at EMNLP 2024🥳🥳. Thank you so much to my amazing co-authors and advisors!!! @ritikarmangla, @AlexGDimakis, @gregd_nlp, @jessyjli
7
9
103
@gregd_nlp
Greg Durrett
1 year
Two awards for UT Austin papers! Salience prediction of inquisitive questions by @YatingWu96 @ritikarmangla @AlexGDimakis me @jessyjli Learning AANNs and insights about grammatical generalization in pre-training by @kanishkamisra & @kmahowald Congrats to all the awardees!
@emnlpmeeting
EMNLP 2025
1 year
Announcing the 20 **Outstanding Papers** for #EMNLP2024
3
17
92