Suhas Arehalli

@sArehalli

Followers: 275 · Following: 680 · Media: 11 · Statuses: 117

{Computational, Psycho}-linguist. Asst. Prof of CS @ Macalester College. he/him.

Joined June 2011
@sArehalli
Suhas Arehalli
3 years
New work from me, @tallinzen, and @linguistbrian to appear at CoNLL: https://t.co/u8aoqCPzCW Q: LM surprisal underestimates garden path effects. Is surprisal a bad theory of processing, or are LM estimates of surprisal just misaligned with human prediction? 🧵below:
arxiv.org
Humans exhibit garden path effects: When reading sentences that are temporarily structurally ambiguous, they slow down when the structure is disambiguated in favor of the less preferred...
5
19
64
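(For readers outside the subfield: surprisal is the negative log probability a language model assigns to a word in context. A minimal sketch of how per-word surprisal is typically estimated, using the Hugging Face transformers library; the model choice and the garden-path sentence are illustrative assumptions, not the paper's setup.)

    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    # A classic garden-path sentence; "fell" is the disambiguating word.
    ids = tokenizer("The horse raced past the barn fell.", return_tensors="pt").input_ids

    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits, dim=-1)

    # Surprisal of token t is -log2 P(token t | preceding tokens).
    for i in range(1, ids.size(1)):
        s = -log_probs[0, i - 1, ids[0, i]].item() / math.log(2)
        print(f"{tokenizer.decode(ids[0, i])!r}: {s:.2f} bits")

The question in the paper is whether the spike in these values at the disambiguating word is large enough to match the slowdown humans show there.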
@grushaprasad
Grusha Prasad
1 year
I am at #ACL2024 and will be presenting joint work with @forrestdavis at the TeachingNLP workshop. The talk is on Aug 15 11:45 am in Lotus Suite 11. I would also love to grab coffee or meals with folks to chat about computational psycholing, SLACs, or life generally.
2
9
37
@sArehalli
Suhas Arehalli
1 year
Also, note to any @Macalester MSCS folks interested in a thesis topic... https://t.co/IlLHj01Kek
@tallinzen
Tal Linzen
1 year
These experiments are from a few years ago, so if you're looking for an undergrad thesis idea I would recommend replicating the experiments with contemporary models and telling me what you find.
0
0
0
@sArehalli
Suhas Arehalli
1 year
On a side note, happy to see all of this finally out in the world. This started as one of the first projects I worked on in grad school (my first QP!) and continued to develop up until my dissertation.
1
0
0
@sArehalli
Suhas Arehalli
1 year
This led us to questions about (1) how much syntactic knowledge the LM task gives models (as much CCG supertagging as our training setup encouraged!) and (2) whether almost parsing is a substitute for real parsing (converging with a lot of other work!).
1
0
0
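(The supervision in (1) amounts to a multi-task objective: one recurrent encoder feeds both a next-word head and a CCG supertag head. A minimal PyTorch sketch; the class name, sizes, and loss weight are illustrative assumptions, not the paper's configuration.)

    import torch.nn as nn

    class LMWithSupertagging(nn.Module):
        """LSTM language model with an auxiliary CCG supertagging head."""
        def __init__(self, vocab_size, n_supertags, d=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d)
            self.lstm = nn.LSTM(d, d, batch_first=True)
            self.lm_head = nn.Linear(d, vocab_size)    # next-word prediction
            self.tag_head = nn.Linear(d, n_supertags)  # current word's supertag

        def forward(self, words):
            h, _ = self.lstm(self.embed(words))
            return self.lm_head(h), self.tag_head(h)

    def joint_loss(lm_logits, tag_logits, next_words, supertags, alpha=1.0):
        ce = nn.CrossEntropyLoss()
        # "Almost parsing": supertag supervision added on top of the LM objective.
        return (ce(lm_logits.transpose(1, 2), next_words)
                + alpha * ce(tag_logits.transpose(1, 2), supertags))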
@sArehalli
Suhas Arehalli
1 year
New(-ish) paper from me and @tallinzen on whether (old) neural LMs capture agreement attraction effects! My highlight is, as Tal mentioned, that RNNs struggle to learn syntactic effects of attraction, and adding supervision on CCG supertagging ("almost parsing!") didn't help!
@tallinzen
Tal Linzen
1 year
Can LMs serve as cognitive models of human language processing? Humans make syntactic agreement errors ("the key to the cabinets are rusty"). @sArehalli and I tested if the errors documented in six human studies emerge in LMs. They... sometimes did.
1
1
8
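(A minimal sketch of the kind of probe such studies run: compare the probability the model assigns to a singular vs. a plural verb after a singular subject with a plural attractor. The model choice and items are illustrative; the paper evaluates conditions from six human studies.)

    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    # Singular subject ("key"), plural attractor ("cabinets").
    ids = tokenizer("The key to the cabinets", return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits[0, -1], dim=-1)

    for verb in [" is", " are"]:  # leading space matters in GPT-2's BPE vocab
        tok = tokenizer.encode(verb)[0]
        print(f"P({verb.strip()!r}) = {math.exp(log_probs[tok].item()):.4f}")

An attraction effect shows up as P("are") being inflated relative to a baseline with a singular attractor ("The key to the cabinet").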
@tallinzen
Tal Linzen
1 year
Can LMs serve as cognitive models of human language processing? Humans make syntactic agreement errors ("the key to the cabinets are rusty"). @sArehalli and I tested if the errors documented in six human studies emerge in LMs. They... sometimes did.
direct.mit.edu
Abstract. Languages are governed by syntactic constraints—structural rules that determine which sentences are grammatical in the language. In English, one such constraint is subject-verb agreement,...
2
7
51
@isoshin626
Shinnosuke Isono 磯野真之介
2 years
📣 New paper from @CognitionJourn on modeling of locality effects in sentence comprehension using CCG. The proposed model nicely accounts for crosslinguistic facts and fits reading times from an English corpus well! Free version (until May 26): https://t.co/pq988ZIsCE
1
18
50
@UtkuTurkLing
utku turk
2 years
My paper (with Pavel Logacev) is out! https://t.co/Glljld37sm We tested case syncretism and its effects in Turkish agreement attraction. Contrary to the predictions of cue-based retrieval models, we (N=118) found no effect of case syncretism on the magnitude of attraction.
tandfonline.com
Speakers have been shown to find sentences with erroneous agreement acceptable under certain conditions. This so-called agreement attraction effect has also been found in genitive-possessive struct...
3
6
45
@lforlasagna
Jane 🐸
2 years
Excited to present some ongoing work at PLC48 tomorrow (Session 3A), where we test the types of morpho-phonological generalizations RNNs may form over limited training data 🤖 Joint work w/ @icoson and Paul Smolensky
0
1
16
@wtimkey8
Will Timkey
2 years
A bit late, but I'll be at #HSP2024 this year! Alongside some incredible coauthors (@psydock112 @sArehalli @grushaprasad @linguistbrian @tallinzen), I'll be presenting a poster about which eye tracking measures LM surprisal does and doesn't explain in garden path sentences.
0
2
14
@RTomMcCoy
Tom McCoy
2 years
I am hoping to hire a postdoc who would start in Fall 2024. If you are interested in the intersection of linguistics, cognitive science, and AI, I encourage you to apply! Please see this link for details: https://t.co/8Ds8X3OQf9
2
48
159
@sArehalli
Suhas Arehalli
2 years
Think you have a model that can explain the precise magnitude of the effects we find? The best part is that the 2000-person dataset is publicly available, so you can see for yourself!
@tallinzen
Tal Linzen
2 years
Very pleased to see this article in print! In a study with 2000 subjects, we track how people read syntactically complex sentences, and find that word predictability estimated from language models does a poor job of explaining the human data.
0
4
17
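(The standard test here is to ask whether surprisal soaks up the garden-path effect in a regression on reading times. A minimal sketch with statsmodels; the file and column names are hypothetical placeholders, not the released dataset's actual schema.)

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical schema: one row per word per subject.
    df = pd.read_csv("garden_path_reading_times.csv")

    # If surprisal fully explained processing difficulty, the garden-path
    # condition should contribute nothing once surprisal is in the model.
    fit = smf.ols("rt ~ surprisal + condition + word_length + log_freq", data=df).fit()
    print(fit.summary())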
@sArehalli
Suhas Arehalli
2 years
Do any (psycho)linguistics folks know of any summer research opportunities in the US available to non-citizens? I have an amazing student who wants to get some hands-on experience at a larger institution, but most opportunities seem tied to NSF REU funding.
1
2
10
@amuuueller
Aaron Mueller
2 years
I (as well as @a_stadt, @weGotlieb, @LChoshen) will present the findings of the BabyLM Challenge on Dec. 7, 3:30 at CoNLL! Come see the high-level findings, as well as talks from the award-winning BabyLMs 👶
1
3
29
@sathvikn4
Sathvik
2 years
I'm really excited to be presenting this work at #EMNLP2023 and at @BlackboxNLP in Singapore this week! Come stop by my poster/message me if you're interested in the topic or more generally about anything at the compling x cogsci intersection.
@sathvikn4
Sathvik
2 years
Honored my paper with @psresnik was accepted to Findings of #EMNLP2023! Many psycholinguistics studies use LLMs to estimate the probability of words in context. But LLMs process statistically derived subword tokens, while human processing does not. Does the disconnect matter? 🧵
0
2
34
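(The mechanical issue is that an LLM scores subword tokens, so a word's probability has to be assembled as the product of its subword probabilities, conditioned left to right. A minimal sketch; the model, context, and example word are illustrative assumptions.)

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    ids = tokenizer("The surgeon scheduled the", return_tensors="pt").input_ids
    log_p = 0.0
    for tid in tokenizer(" craniotomy").input_ids:  # rare word -> several BPE tokens
        with torch.no_grad():
            logits = model(ids).logits[0, -1]
        # log P(word | context) accumulates log P(subword | context so far).
        log_p += torch.log_softmax(logits, dim=-1)[tid].item()
        ids = torch.cat([ids, torch.tensor([[tid]])], dim=1)

    print(f"log P('craniotomy' | context) = {log_p:.2f}")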
@najoungkim
Najoung Kim 🫠
2 years
🧙‍♀️ I'm hoping to recruit ~1 PhD student this cycle through @BULinguistics! Students who are broadly interested in meaning and computational models would be a good fit. I'll mention a few specific topics I've been working on & looking to expand below:
3
40
121
@RTomMcCoy
Tom McCoy
2 years
My colleagues and I are accepting applications for PhD students at Yale. If you think you would be a good fit, consider applying! Most of my research is about bridging the divide between linguistics and artificial intelligence (often connecting to CogSci & large language models)
6
62
245
@wtimkey8
Will Timkey
2 years
Language models are superhuman - How can we make them into more humanlike cognitive models? In a new #EMNLP2023 Findings paper w/ @tallinzen we show that LMs with limited memory retrieval capacity pattern like humans in agreement+semantic attraction https://t.co/mygTnAozRG (🧵)
3
15
96
@sathvikn4
Sathvik
2 years
Honored my paper with @psresnik was accepted to Findings of #EMNLP2023! Many psycholinguistics studies use LLMs to estimate the probability of words in context. But LLMs process statistically derived subword tokens, while human processing does not. Does the disconnect matter? 🧵
14
25
128
@emaliemcmahon
Emalie McMahon
2 years
In our new paper out today at @TrendsCognSci , @leylaisi and I argue that social interaction perception is a visual process--computed by the visual system and distinct from higher-level cognitive processes. https://t.co/g8twcLaYin
3
50
200