Manoj Kumar @[email protected]
@manojneuro
Followers 185 · Following 440 · Media 12 · Statuses 292
I'm interested in how the brain uses context and predictions to shape our understanding and memories. @Princetonneuro https://t.co/j13gcfGGjW
Joined March 2017
Excited to share our work using GPT-2 to test theories of human event segmentation, w/@GoldsteinYAriel @s_michelmann Jeff Zacks @HassonLab @ptoncompmemlab
https://t.co/SVqZbOHjYe via @OSFramework 🧵 1/N
osf.io
Event segmentation theory posits that people segment continuous experience into discrete events, and that event boundaries occur when there are large transient increases in prediction error. Here, we...
2
25
84
So happy that our paper on event segmentation in large language models is now out in Behavior Research Methods!
Excited to share our new preprint https://t.co/c8kgodk5hS with @mtoneva1, @ptoncompmemlab, and @manojneuro, in which we ask whether GPT-3 (a large language model) can segment narratives into meaningful events similarly to humans. We use an unconventional approach: ⬇️
0
5
15
🚨🚨 New paper alert! "Disruption to left inferior frontal cortex modulates semantic prediction effects in reading and subsequent memory: Evidence from simultaneous TMS-EEG" in press at Psychophysiology @TheRealSPR -- headed by the wonderful @jack_silcox 🧵
1
4
10
Congratulations Kara!!! Very well-deserved! @CABlabUIUC
Former SPR Early Career awardee (2006) and past president (2017-2019), Kara Federmeier, was named a 2022 Fellow of the American Association for the Advancement of Science. Kara is a leader in helping us understand how the brain predicts and constructs meaning in language! #AAAS
0
1
7
My amazing postdoc mentor was elected to AAAS!!! @CABlabUIUC @BeckmanInst
3
3
13
Excited to share our new preprint https://t.co/c8kgodk5hS with @mtoneva1, @ptoncompmemlab, and @manojneuro, in which we ask whether GPT-3 (a large language model) can segment narratives into meaningful events similarly to humans. We use an unconventional approach: ⬇️
arxiv.org
Humans perceive discrete events such as "restaurant visits" and "train rides" in their continuous experience. One important prerequisite for studying human event perception is the ability of...
2
30
94
Excited to share that the deadline for our Neuropsychologia Issue on Cognitive Neuroscience using Naturalistic Paradigms will be extended to June 1st, 2023. We've received some fascinating submissions so far and look forward to seeing more! Pls RT! Info:
3
31
65
My Mastodon account: still figuring out how to make it all work on the other side.
mastodon.online
0 Posts, 10 Following, 32 Followers · I am interested in how predictions in the brain shape perception and memory. Currently working in the computational memory lab @[email protected] at...
0
0
0
Check out our newly published paper (led by Dr. Evan Center) in JOV: https://t.co/luXI6Ba4kH Title: Typical viewpoints of objects are better detected than atypical ones
0
1
8
Are you interested in helping to level the playing field for PhD applicants in psychology? Consider applying to this (by Oct 12): https://t.co/2uXD8Io6ip. Looks like a great program @ErikNook
0
2
5
We're now accepting applications for the @PrincetonNeuro 2023 Summer Internship Program! We're @NSF funded and we offer a generous stipend, housing, meals, and travel (including to @SfNtweets)! Learn more here and apply by 2/1/23! https://t.co/6xuDW5U1jC
2
114
175
@PrincetonNeuro will be hosting a virtual open house on Fri, Oct 21 from 12-1 pm ET. If you're interested in our grad program, please register at the following link or by scanning the QR code below. https://t.co/0QG5Br8lUL
0
10
22
More generally, these results show how language models from AI can be used as a tool to gain insight into how humans represent rich, structured narratives 13/N
0
0
1
Bayesian surprise, which measures changes in the content and confidence of participants’ predictions, does better than surprisal, which only tracks the (predicted) probability of each individual word in the story 12/N
1
0
2
These results provide support for Event Segmentation theory, and they also show that not all measures of PE are created equal: 11/N
1
0
1
We found that transient (i.e., normalized-in-time) Bayesian surprise substantially outperformed the other measures at predicting event boundaries 10/N
1
0
1
To compare the different models, we computed the PE measures described above based on three text stories, and used a regression model to relate these PE measures to human-annotated boundaries for those stories 9/N
1
0
0
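A minimal sketch of the regression step described above, with entirely hypothetical data: word-level PE time series as predictors and binary human boundary annotations as the target. The choice of logistic regression, the variable names, and the toy data are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical word-level PE time series for one story (illustrative only).
rng = np.random.default_rng(0)
n_words = 500
surprisal_ts = rng.gamma(2.0, 1.0, n_words)  # stand-in for -log p(w_t)
bayes_ts = rng.gamma(2.0, 1.0, n_words)      # stand-in for KL-based surprise
entropy_ts = rng.gamma(2.0, 1.0, n_words)    # stand-in for entropy

# Hypothetical annotations: 1 where human annotators marked an event boundary.
boundaries = rng.binomial(1, 0.05, n_words)

# Relate the PE measures to the boundary vector with a simple regression.
X = np.column_stack([surprisal_ts, bayes_ts, entropy_ts])
clf = LogisticRegression().fit(X, boundaries)
print(dict(zip(["surprisal", "bayes", "entropy"], clf.coef_[0])))
```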
Lastly, we can look at whether it’s important to normalize these measures in time: Are event boundaries more likely to occur when a PE occurs after a period of low PE vs. high PE? 8/N
1
0
0
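One simple way to operationalize the normalization-in-time question above, sketched as a hedged guess rather than the paper's method: z-score each PE value against a trailing window, so the signal is large only when PE jumps relative to its recent baseline. The window length and the z-scoring itself are illustrative choices.

```python
import numpy as np

def transient(pe, window=20):
    """Z-score each PE value against the preceding `window` values, so a
    point counts as 'transient' only if it exceeds its recent baseline."""
    pe = np.asarray(pe, dtype=float)
    out = np.zeros_like(pe)
    for t in range(1, len(pe)):
        past = pe[max(0, t - window):t]
        out[t] = (pe[t] - past.mean()) / (past.std() + 1e-8)
    return out

# Example: a sudden jump after a period of low PE yields a large transient value.
rng = np.random.default_rng(0)
pe = rng.normal(1.0, 0.2, 60)
pe[40] = 5.0  # one sudden jump
print(transient(pe)[40])
```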
We can also look at other measures related to PE like entropy (uncertainty at a particular time point) and change in stimulus features (operationalized as the change in GPT-2’s hidden representation) 7/N
1
0
0
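A self-contained sketch of these two additional measures using Hugging Face GPT-2 (assuming torch and transformers are installed): the entropy of the next-word distribution at each position, and the norm of the change in GPT-2's hidden state as a stimulus-change proxy. Reading out the final layer is an assumption; the paper may use a different layer or distance.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tokenizer("We boarded the train and found our seats.",
                return_tensors="pt").input_ids

with torch.no_grad():
    out = model(ids, output_hidden_states=True)

# Entropy (uncertainty) of the next-word distribution at each position.
probs = torch.softmax(out.logits[0], dim=-1)                   # (T, vocab)
entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)  # (T,)

# Change in stimulus features: distance between successive hidden states
# (final layer here, an illustrative choice).
h = out.hidden_states[-1][0]                                   # (T, d_model)
feature_change = (h[1:] - h[:-1]).norm(dim=-1)                 # (T-1,)
```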
But another way to operationalize PE is Bayesian surprise: the change in the entire predictive distribution over words from t-1 to t, measured using KL divergence 6/N
1
0
1
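A self-contained sketch of the Bayesian surprise definition above, computed from GPT-2's predictive distributions (a hedged reading of the tweet, not the paper's code): the KL divergence between the next-word distribution after seeing word t and the one from t-1. The direction of the KL here is one plausible convention.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tokenizer("We boarded the train and found our seats.",
                return_tensors="pt").input_ids

with torch.no_grad():
    probs = torch.softmax(model(ids).logits[0], dim=-1)  # (T, vocab)

# KL between successive predictive distributions: how much the model's
# beliefs about the upcoming word shift once word t arrives.
p_new, p_old = probs[1:], probs[:-1]
bayes_surprise = (p_new * (p_new.clamp_min(1e-12).log()
                           - p_old.clamp_min(1e-12).log())).sum(dim=-1)
print(bayes_surprise)
```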
Importantly, we can compare different “flavors” of PE to see how strongly they relate to event boundaries. One commonly used PE measure is surprisal: the negative log of the probability assigned (at time t-1) to the word shown at time t … 5/N
1
0
0
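A minimal sketch of word-level surprisal from GPT-2 (assuming the Hugging Face transformers API; an illustration of the definition above, not the paper's exact code): the negative log probability that the model assigned at position t-1 to the token that actually appears at position t.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tokenizer("We boarded the train and found our seats.",
                return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits  # (1, T, vocab)

# Logits at position t-1 give the predictive distribution for the token at t.
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # (T-1, vocab)
surprisal = -log_probs[torch.arange(ids.size(1) - 1), ids[0, 1:]]

for tok, s in zip(tokenizer.convert_ids_to_tokens(ids[0, 1:].tolist()),
                  surprisal):
    print(f"{tok:>12s} {s.item():6.2f} nats")
```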