adam ⊗ osth

@AdamOsth

Followers
464
Following
14K
Media
167
Statuses
2K

Associate professor @unimelb. Episodic memory, decision making, mathematical psychology. Friend to all cats. He/him.

Melbourne, Victoria
Joined December 2017
@kareem_carr
Dr Kareem Carr
10 months
Nassim Taleb has written a devastatingly strong critique of IQ, but since he writes at such a technical level, his most powerful insights are being missed. Let me explain just one of them. 🧵
163
695
6K
@AdamOsth
adam ⊗ osth
1 year
Once these other types of matches were included in the model, it could account for variation in hit rates very well!
0
0
0
@AdamOsth
adam ⊗ osth
1 year
To account for hit rates, we rely on the hybrid similarity framework. In this model, self-similarities can be augmented through other types of matches. We were able to accomplish this through distinctiveness ratings.
1
0
0
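A minimal sketch of the hybrid-similarity idea described above, assuming a GCM-style exponential similarity; the additive augmentation term and its weight are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def summed_similarity(probe, memory, c=1.0):
    """GCM-style familiarity: summed exponential similarity of the
    probe to every stored item."""
    return sum(np.exp(-c * np.linalg.norm(probe - m)) for m in memory)

def hybrid_hit_signal(probe, memory, distinctiveness, w=0.5, c=1.0):
    """Hybrid-similarity variant: the self-match (which would otherwise
    always equal exp(0) = 1) is augmented by an item-specific term tied
    to a distinctiveness rating, so predicted hit rates can now vary
    across items. The additive form and weight w are assumptions."""
    return summed_similarity(probe, memory, c) + w * distinctiveness
```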
@AdamOsth
adam ⊗ osth
1 year
We found that the model gives a beautiful account of variation in false recognition rates across items. But hit rates? Not so much! This is because the self-similarity is always 1 and does not vary across items.
1
0
0
@AdamOsth
adam ⊗ osth
1 year
In this work, we attempted to jointly account for item memorability and a hallmark of context dependence, the category length effect. Increases in category size naturally produce larger false recognition rates. Models account for this easily because the matches to memory increase with category size.
1
0
0
@AdamOsth
adam ⊗ osth
1 year
This naturally varies between items - items that are more similar to other items yield stronger memory signals. What's interesting is that this can explain memorability without any recourse to it being a stimulus property. Memorability in this framework is context dependent.
1
0
0
@AdamOsth
adam ⊗ osth
1 year
Models of recognition - such as the generalized context model (GCM) - have already been able to model memory at the item level for some time! In these models, the memory signal is the summed similarity between the retrieval cue and the contents of memory.
1
0
0
@AdamOsth
adam ⊗ osth
1 year
A lot of studies have focused on memorability for individual items. But what the heck is memorability anyway? In this preprint with Rob Nosofsky, we attempt a theoretical account using models of recognition memory.
2
1
7
@ItaiYanai
Itai Yanai
1 year
Doing good science is 90% finding a science buddy to constantly talk to about the project.
180
3K
18K
@JaceSerrano
Jace Serrano
1 year
Haley Joel Osment should win an Emmy for his portrayal of JD Vance trying to buy a donut
1K
22K
165K
@hausofdecline
Haus of Decline
1 year
"The mayor of New York has been arrested for corruption" sounds like the preamble that would appear before a 90s beat-em-up game, and I'm here for it. I'm ready to fight identical guys and gain health with street chicken. I'm going to change my name to "Blaze."
340
15K
88K
@NoCatsNoLife_m
No Cats No Life
1 year
130
6K
51K
@Criminalsimpson
Criminalsimpsons
1 year
Best newspaper headlines/articles in The Simpsons 🗞️ - a thread
124
2K
27K
@timothyjballard
Timothy Ballard
1 year
Come join our awesome lab! We're recruiting a PhD student to join an exciting project that uses computational modelling and machine learning to understand the hidden costs of long working hours on mental well-being. Full details here:
0
3
15
@AdamOsth
adam ⊗ osth
1 year
our model can also account for variability across items... sort of! the representations enable predictions at the item level. we did pretty well with the semantic DRM task but got only weak-to-moderate correspondence with the perceptual DRM task.
0
0
0
@AdamOsth
adam ⊗ osth
1 year
the model is tractable and we were able to fit individual subjects! what's interesting about this is that false recognition actually varies considerably across subjects. some subjects show almost no false recognition at all. our model can account for that variability very well.
1
0
0
@AdamOsth
adam ⊗ osth
1 year
what's cool about using the LBA process is that it naturally provides an account of higher false recognition rates under time pressure. most previous studies have assumed that recollection is affected, but our work shows that most of it is a speed-accuracy tradeoff.
1
0
0
@AdamOsth
adam ⊗ osth
1 year
we *also* captured complete RT distributions! we had to collect our own data, and some pretty big datasets at that, as we needed lots of critical lure trials in order to get stable estimates of the RT distributions. RTs for both responses were captured using an LBA process.
1
0
0
@AdamOsth
adam ⊗ osth
1 year
our model can capture *both* semantic and perceptual DRM errors. we used word2vec to capture semantic confusions and open bigrams to capture perceptual confusions. both types of representations are retrieved in a global matching framework.
1
0
0
@AdamOsth
adam ⊗ osth
1 year
what folks don't always appreciate is that there is also a *perceptual* DRM paradigm, where there are similar errors when subjects study similar-looking or similar-sounding words - studying "fat", "that", "cab", etc. will often lead to high false recognition of the word "cat".
1
0
0
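Running the open-bigram helper sketched above on exactly this example: each studied word shares bigrams with the unstudied lure, and those matches sum across the whole study list in a global-matching model:

```python
# orthographic_sim as defined in the global-matching sketch above
for study_word in ("fat", "that", "cab"):
    print(study_word, round(orthographic_sim("cat", study_word), 2))
# fat 0.2, that 0.12, cab 0.2 -- every studied word overlaps the lure
# "cat", so the lure's summed orthographic match is elevated
```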