Suhas Arehalli
@sArehalli
Followers 275 · Following 680 · Media 11 · Statuses 117
{Computational, Psycho}-linguist. Asst. Prof of CS @ Macalester College. he/him.
Joined June 2011
New work from me, @tallinzen, and @linguistbrian to appear at CoNLL: https://t.co/u8aoqCPzCW Q: LM surprisal underestimates garden path effects. Is surprisal a bad theory of processing, or are LM estimates of surprisal just misaligned with human prediction? 🧵 below:
arxiv.org
Humans exhibit garden path effects: When reading sentences that are temporarily structurally ambiguous, they slow down when the structure is disambiguated in favor of the less preferred...
5 replies · 19 reposts · 64 likes
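For readers new to the quantity at issue in the thread above: surprisal is the negative log probability a language model assigns to a word in its context, and surprisal theories of processing map higher surprisal onto longer reading times. A minimal sketch with made-up illustrative probabilities (not values from the paper or any actual model):

```python
import math

def surprisal(prob: float) -> float:
    """Surprisal in bits: -log2 of the probability a model assigns to a word."""
    return -math.log2(prob)

# Toy illustration: at the disambiguating word of a garden-path sentence
# ("The horse raced past the barn *fell*"), a model that assigns the word
# a low probability yields high surprisal, predicting a reading slowdown.
print(surprisal(0.5))    # 1.0 bit
print(surprisal(0.001))  # ~9.97 bits
```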
I am at #ACL2024 and will be presenting joint work with @forrestdavis at the TeachingNLP workshop. The talk is on Aug 15 11:45 am in Lotus Suite 11. I would also love to grab coffee or meals with folks to chat about computational psycholing, SLACs, or life generally.
2 replies · 9 reposts · 37 likes
Also, note to any @Macalester MSCS folks interested in a thesis topic... https://t.co/IlLHj01Kek
These experiments are from a few years ago, so if you're looking for an undergrad thesis idea I would recommend replicating the experiments with contemporary models and telling me what you found.
0 replies · 0 reposts · 0 likes
On a side note, happy to see all of this finally out in the world. This started as one of the first projects I worked on in grad school (my first QP!) and continued to develop up until my dissertation.
1 reply · 0 reposts · 0 likes
This led us to questions about (1) how much syntactic knowledge the LM task gives models (as much CCG supertagging as our training setup encouraged!) and (2) whether almost-parsing is a substitute for real parsing (converging with lots of other work!).
1 reply · 0 reposts · 0 likes
New(-ish) paper from me and @tallinzen on whether (old) neural LMs capture agreement attraction effects! My highlight is, as Tal mentioned, that RNNs struggle to learn syntactic effects of attraction, and adding supervision on CCG supertagging ("almost parsing!") didn't help!
Can LMs serve as cognitive models of human language processing? Humans make syntactic agreement errors ("the key to the cabinets are rusty"). @sArehalli and I tested if the errors documented in six human studies emerge in LMs. They... sometimes did.
1 reply · 1 repost · 8 likes
Can LMs serve as cognitive models of human language processing? Humans make syntactic agreement errors ("the key to the cabinets are rusty"). @sArehalli and I tested if the errors documented in six human studies emerge in LMs. They... sometimes did.
direct.mit.edu
Abstract. Languages are governed by syntactic constraints—structural rules that determine which sentences are grammatical in the language. In English, one such constraint is subject-verb agreement,...
2 replies · 7 reposts · 51 likes
📣 New paper from @CognitionJourn on modeling of locality effects in sentence comprehension using CCG. The proposed model nicely accounts for crosslinguistic facts and fits well to reading times from an English corpus! Free version (until May 26): https://t.co/pq988ZIsCE
1 reply · 18 reposts · 50 likes
My paper (with Pavel Logacev) is out! https://t.co/Glljld37sm We tested case syncretism and its effects in Turkish agreement attraction. Unlike what is predicted by cue-based retrieval models, we (N=118) found no effect of case syncretism on the magnitude of the attraction.
tandfonline.com
Speakers have been shown to find sentences with erroneous agreement acceptable under certain conditions. This so-called agreement attraction effect has also been found in genitive-possessive struct...
3 replies · 6 reposts · 45 likes
Excited to present some ongoing work at PLC48 tomorrow (Session 3A), where we test the types of morpho-phonological generalizations RNNs may form over limited training data 🤖 Joint work w/ @icoson and Paul Smolensky
0 replies · 1 repost · 16 likes
A bit late, but I'll be at #HSP2024 this year! Alongside some incredible coauthors (@psydock112 @sArehalli @grushaprasad @linguistbrian @tallinzen), I'll be presenting a poster about which eye tracking measures LM surprisal does and doesn't explain in garden path sentences.
0 replies · 2 reposts · 14 likes
I am hoping to hire a postdoc who would start in Fall 2024. If you are interested in the intersection of linguistics, cognitive science, and AI, I encourage you to apply! Please see this link for details: https://t.co/8Ds8X3OQf9
2 replies · 48 reposts · 159 likes
Think you have a model that can explain the precise magnitude of the effects we find? The best part is that the 2000-person dataset is publicly available, so you can see for yourself!
Very pleased to see this article in print! In a study with 2000 subjects, we track how people read syntactically complex sentences, and find that word predictability estimated from language models does a poor job of explaining the human data.
0 replies · 4 reposts · 17 likes
Do any (psycho)linguistics folks know of any summer research opportunities in the US available to non-citizens? I have an amazing student who wants to get some hands-on experience at a larger institution, but most opportunities seem tied to NSF REU funding.
1 reply · 2 reposts · 10 likes
I (as well as @a_stadt, @weGotlieb, @LChoshen) will present the findings of the BabyLM Challenge on Dec. 7, 3:30 at CoNLL! Come see the high-level findings, as well as talks from the award-winning BabyLMs 👶
1 reply · 3 reposts · 29 likes
I'm really excited to be presenting this work at #EMNLP2023 and at @BlackboxNLP in Singapore this week! Come stop by my poster/message me if you're interested in the topic or more generally about anything at the compling x cogsci intersection.
Honored my paper with @psresnik was accepted to Findings of #EMNLP2023! Many psycholinguistics studies use LLMs to estimate the probability of words in context. But LLMs process statistically derived subword tokens, while human processing does not. Does the disconnect matter? 🧵
0 replies · 2 reposts · 34 likes
🧙‍♀️ I'm hoping to recruit ~1 PhD student this cycle through @BULinguistics! Students who are broadly interested in meaning and computational models would be a good fit. I'll mention a few specific topics I've been working on & looking to expand below:
3 replies · 40 reposts · 121 likes
My colleagues and I are accepting applications for PhD students at Yale. If you think you would be a good fit, consider applying! Most of my research is about bridging the divide between linguistics and artificial intelligence (often connecting to CogSci & large language models)
6 replies · 62 reposts · 245 likes
Language models are superhuman - How can we make them into more humanlike cognitive models? In a new #EMNLP2023 Findings paper w/ @tallinzen we show that LMs with limited memory retrieval capacity pattern like humans in agreement+semantic attraction https://t.co/mygTnAozRG (🧵)
3 replies · 15 reposts · 96 likes
Honored my paper with @psresnik was accepted to Findings of #EMNLP2023! Many psycholinguistics studies use LLMs to estimate the probability of words in context. But LLMs process statistically derived subword tokens, while human processing does not. Does the disconnect matter? 🧵
14 replies · 25 reposts · 128 likes
In our new paper out today at @TrendsCognSci , @leylaisi and I argue that social interaction perception is a visual process, computed by the visual system and distinct from higher-level cognitive processes. https://t.co/g8twcLaYin
3 replies · 50 reposts · 200 likes