Manoj Kumar @manojneuro@mastodon.online Profile
Manoj Kumar @manojneuro@mastodon.online

@manojneuro

Followers
185
Following
440
Media
12
Statuses
292

I'm interested in how the brain uses context and predictions to shape our understanding and memories. @Princetonneuro https://t.co/j13gcfGGjW

Joined March 2017
@s_michelmann
Sebastian Michelmann
1 year
So happy that our paper on event segmentation in large language models is now out in Behavior Research Methods!
0
5
15
@LamaLabUtah
Language and Memory Aging Lab
3 years
🚨🚨 New paper alert! "Disruption to left inferior frontal cortex modulates semantic prediction effects in reading and subsequent memory: Evidence from simultaneous TMS-EEG" in press at Psychophysiology @TheRealSPR -- headed by the wonderful @jack_silcox 🧵
1
4
10
@NeureauxHeather
Heather Lucas
3 years
Congratulations Kara!!! Very well-deserved! @CABlabUIUC
@TheRealSPR
TheRealSPR
3 years
Former SPR Early Career awardee (2006) and past president (2017-2019), Kara Federmeier, was named a 2022 Fellow of the American Association for the Advancement of Science. Kara is a leader in helping us understand how the brain predicts and constructs meaning in language! #AAAS
0
1
7
@mtroyer_
Melissa Troyer
3 years
My amazing postdoc mentor was elected to AAAS!!! @CABlabUIUC @BeckmanInst
3
3
13
@s_michelmann
Sebastian Michelmann
3 years
Excited to share our new preprint https://t.co/c8kgodk5hS with @mtoneva1, @ptoncompmemlab, and @manojneuro, in which we ask if GPT-3 (a large language model) can segment narratives into meaningful events similarly to humans. We use an unconventional approach: ⬇️
arxiv.org
Humans perceive discrete events such as "restaurant visits" and "train rides" in their continuous experience. One important prerequisite for studying human event perception is the ability of...
2
30
94
@IM_Inman
Cory Inman
3 years
Excited to share that the deadline for our Neuropsychologia Issue on Cognitive Neuroscience using Naturalistic Paradigms will be extended to June 1st, 2023. We've received some fascinating submissions so far and look forward to seeing more! Pls RT! Info:
3
31
65
@BecklabIllinois
Attention and Perception Lab
3 years
Check out our newly published paper (led by Dr. Evan Center) in JOV: https://t.co/luXI6Ba4kH Title: Typical viewpoints of objects are better detected than atypical ones
0
1
8
@LLEmberson
Lauren Emberson
3 years
Are you interested in helping to level the playing field for PhD applicants in psychology? Consider applying to this (by Oct 12): https://t.co/2uXD8Io6ip. Looks like a great program @ErikNook
0
2
5
@edcclayton
Ed Clayton
3 years
We're now accepting applications for the @PrincetonNeuro 2023 Summer Internship Program! We're @NSF funded and we offer a generous stipend, housing, meals, and travel (including to @SfNtweets)! Learn more here and apply by 2/1/23! https://t.co/6xuDW5U1jC
2
114
175
@edcclayton
Ed Clayton
3 years
@PrincetonNeuro will be hosting a virtual open house on Fri, Oct 21 from 12-1 pm ET. If you're interested in our grad program, please register at the following link or by scanning the QR code below. https://t.co/0QG5Br8lUL
0
10
22
@manojneuro
Manoj Kumar @manojneuro@mastodon.online
3 years
More generally, these results show how language models from AI can be used as a tool to gain insight into how humans represent rich, structured narratives 13/N
0
0
1
@manojneuro
Manoj Kumar @manojneuro@mastodon.online
3 years
Bayesian surprise, which measures changes in the content and confidence of participants’ predictions, does better than surprisal, which only tracks the (negative log) predicted probability of each individual word in the story 12/N
1
0
2
@manojneuro
Manoj Kumar @manojneuro@mastodon.online
3 years
These results provide support for Event Segmentation theory, and they also show that not all measures of PE are created equal: 11/N
1
0
1
@manojneuro
Manoj Kumar @manojneuro@mastodon.online
3 years
We found that transient (i.e., normalized-in-time) Bayesian surprise substantially outperformed the other measures at predicting event boundaries 10/N
1
0
1
@manojneuro
Manoj Kumar @manojneuro@mastodon.online
3 years
To compare the different models, we computed the PE measures described above based on three text stories, and used a regression model to relate these PE measures to human-annotated boundaries for those stories 9/N
1
0
0
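A schematic of the regression step described in the tweet above, assuming scikit-learn; the arrays are random placeholders standing in for the per-word PE measures and the human boundary annotations, not the paper's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_words = 500  # placeholder story length

# Word-by-word PE measures (placeholders for real model-derived values)
X = np.column_stack([
    rng.random(n_words),  # surprisal
    rng.random(n_words),  # Bayesian surprise
    rng.random(n_words),  # entropy
])
y = rng.integers(0, 2, size=n_words)  # 1 = human-annotated event boundary

reg = LogisticRegression().fit(X, y)
print(reg.coef_)  # relative contribution of each PE measure
```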
@manojneuro
Manoj Kumar @manojneuro@mastodon.online
3 years
Lastly, we can look at whether it’s important to normalize these measures in time: Are event boundaries more likely to occur when a PE occurs after a period of low PE vs. high PE? 8/N
1
0
0
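A minimal sketch of one way to "normalize in time": z-score each PE value against a trailing window, so the same spike counts for more after a quiet stretch than after a noisy one. The window length here is an illustrative choice, not taken from the paper.

```python
import numpy as np

def transient(pe, window=10):
    """Z-score each PE value against the preceding `window` values."""
    pe = np.asarray(pe, dtype=float)
    out = np.full_like(pe, np.nan)  # first `window` points are undefined
    for t in range(window, len(pe)):
        past = pe[t - window:t]
        out[t] = (pe[t] - past.mean()) / (past.std() + 1e-8)
    return out
```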
@manojneuro
Manoj Kumar @manojneuro@mastodon.online
3 years
We can also look at other measures related to PE like entropy (uncertainty at a particular time point) and change in stimulus features (operationalized as the change in GPT-2’s hidden representation) 7/N
1
0
0
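A sketch of both measures under a GPT-2 setup with the Hugging Face transformers library (the input sentence is a placeholder): per-position entropy of the next-word distribution, and the step-to-step change in the final hidden layer as a stand-in for stimulus-feature change.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tokenizer("Then the train pulled into the station.",
                return_tensors="pt").input_ids
with torch.no_grad():
    out = model(ids, output_hidden_states=True)

log_p = torch.log_softmax(out.logits[0], dim=-1)  # (seq, vocab)
entropy = -(log_p.exp() * log_p).sum(dim=-1)      # uncertainty at each position

h = out.hidden_states[-1][0]                      # final-layer states (seq, dim)
feature_change = (h[1:] - h[:-1]).norm(dim=-1)    # representational shift per step
```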
@manojneuro
Manoj Kumar @manojneuro@mastodon.online
3 years
But another way to operationalize PE is Bayesian surprise: the change in the entire predictive distribution over words from t-1 to t, measured using KL divergence 6/N
1
0
1
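A sketch of Bayesian surprise under a GPT-2 setup with the Hugging Face transformers library: the KL divergence between the model's next-word distributions at successive positions. It is computed here as KL(P_t ‖ P_{t-1}); the direction of the divergence and the input text are illustrative choices.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tokenizer("The waiter brought the menu. Then the train arrived.",
                return_tensors="pt").input_ids
with torch.no_grad():
    log_p = torch.log_softmax(model(ids).logits[0], dim=-1)  # (seq, vocab)

# KL(P_t || P_{t-1}): how much the belief about the next word
# shifted after reading the word at position t.
bayesian_surprise = (log_p[1:].exp() * (log_p[1:] - log_p[:-1])).sum(dim=-1)
print(bayesian_surprise)
```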
@manojneuro
Manoj Kumar @manojneuro@mastodon.online
3 years
Importantly, we can compare different “flavors” of PE to see how strongly they relate to event boundaries. One commonly-used PE measure is surprisal: the negative log of the probability (predicted at time t-1) of the word shown at time t … 5/N
1
0
0
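A minimal sketch of per-word surprisal with GPT-2, assuming the Hugging Face transformers library; the story text is a placeholder. The distribution predicted at t-1 is scored against the word actually shown at t.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tokenizer("The waiter brought the menu. Then the train arrived.",
                return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits  # (1, seq, vocab)

# Score the distribution predicted at t-1 against the token shown at t.
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
next_tokens = ids[0, 1:]
surprisal = -log_probs[torch.arange(next_tokens.numel()), next_tokens]

for tok, s in zip(tokenizer.convert_ids_to_tokens(next_tokens.tolist()), surprisal):
    print(f"{tok!r}: {s.item():.2f} nats")
```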