adam ⊗ osth
@AdamOsth
Followers 464 · Following 14K · Media 167 · Statuses 2K
Associate professor @unimelb. Episodic memory, decision making, mathematical psychology. Friend to all cats. He/him.
Melbourne, Victoria
Joined December 2017
Nassim Taleb has written a devastatingly strong critique of IQ, but since he writes at such a technical level, his most powerful insights are being missed. Let me explain just one of them. 🧵
Once these other types of matches were included in the model, it could account for variation in hit rates very well!
To account for hit rates, we rely on the hybrid similarity framework. In this model, self-similarities can be augmented through other types of matches. We were able to accomplish this through distinctiveness ratings.
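To make the hybrid-similarity idea concrete, here's a toy sketch (my own illustration, not the model as fitted in the preprint): continuous similarity exp(-c·d) is multiplicatively boosted for each matching distinctive feature, so self-matches are no longer pinned at 1 and can vary across items. The function name, the multiplicative `boost` parameter, and all values are assumptions.

```python
# Hedged sketch of hybrid similarity: continuous similarity is
# multiplicatively augmented when probe and exemplar share distinctive
# discrete features. Parameter names and values are illustrative only.
import math

def hybrid_similarity(distance, shared_features, c=1.0, boost=2.0):
    """exp(-c * d), boosted once per matching distinctive feature."""
    return (boost ** shared_features) * math.exp(-c * distance)

# Self-match: distance 0, all of the item's distinctive features shared.
# Items with more distinctive features get a larger self-similarity,
# so self-matches vary across items instead of always equaling 1.
print(hybrid_similarity(0.0, shared_features=3))  # 8.0
print(hybrid_similarity(0.0, shared_features=0))  # 1.0
```

The key point is only the multiplicative form: however the boost is parameterized (here via distinctiveness), the self-match becomes an item-level quantity that can track hit rates.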
We found that the model gets a beautiful account of variation in false recognition rates across items. But hit rates? Not so much! This is because the self-similarity is always 1 and does not vary across items.
In this work, we attempted to jointly account for item memorability and a hallmark of context dependence, the category length effect. Increases in category size naturally produce larger false recognition rates. Models account for this easily because the matches to memory increase.
This naturally varies between items - items that are more similar to other items yield stronger memory signals. What's interesting is that this can explain memorability without any recourse to it being a stimulus property. Memorability in this framework is context dependent.
Models of recognition - such as the generalized context model (GCM) - have already been able to model memory at the item level for some time! In these models, the memory signal is the summed similarity between the retrieval cue and the contents of memory.
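A minimal sketch of the summed-similarity computation (my illustration, not the GCM as fitted in the preprint; the exemplar coordinates and the value of the specificity parameter c are assumptions):

```python
# Toy summed-similarity memory signal in the style of the generalized
# context model (GCM): the signal is the sum of exponential-decay
# similarities between the retrieval cue and every stored exemplar.
import math

def summed_similarity(probe, memory, c=1.0):
    """Sum of exp(-c * distance) over all exemplars in memory."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(math.exp(-c * distance(probe, item)) for item in memory)

# Three stored exemplars in a 2-D similarity space (made-up points).
memory = [(0.0, 0.0), (0.1, 0.2), (2.0, 2.0)]

# A probe near a cluster of stored items yields a larger signal than a
# probe in a sparse region - this is the item-level variation the
# thread describes, with no "memorability" property on the item itself.
print(summed_similarity((0.0, 0.1), memory))
print(summed_similarity((5.0, 5.0), memory))
```

Adding items to a category simply adds terms to the sum, which is why the category length effect falls out of this framework for free.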
A lot of studies have focused on memorability for individual items. But what the heck is memorability anyway? In this preprint with Rob Nosofsky, we attempt a theoretical account using models of recognition memory.
Doing good science is 90% finding a science buddy to constantly talk to about the project.
Haley Joel Osment should win an Emmy for his portrayal of JD Vance trying to buy a donut
Best newspaper headline/articles in The Simpsons 🗞️ - a thread
Come join our awesome lab! We're recruiting a PhD student to join an exciting project that uses computational modelling and machine learning to understand the hidden costs of long working hours on mental well-being. Full details here:
our model can also account for variability across items... sort of! the representations enable predictions at the item level. we did pretty well with the semantic DRM task but got only weak-to-moderate correspondence with the perceptual DRM task.
the model is tractable and we were able to fit individual subjects! what's interesting about this is that false recognition actually varies considerably across subjects. some subjects show almost no false recognition at all. our model can account for that variability very well.
what's cool about using the LBA process is that it naturally provides an account of higher false recognition rates under time pressure. most studies previously have assumed that recollection is affected, but our work shows that most of it is a speed-accuracy tradeoff.
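a toy illustration of the speed-accuracy point (my own simulation, not the fitted model - accumulator drifts, threshold, and all other parameter values are assumptions): in an LBA race between "old" and "new" accumulators, lowering the response threshold makes start-point noise matter more relative to the evidence, so the weaker "old" accumulator wins more races to a lure - false recognition rises with no change at all to the underlying memory signal.

```python
# Two-accumulator linear ballistic accumulator (LBA) race for an
# old/new decision. Each accumulator starts at U(0, A) and rises
# linearly at a normally distributed drift rate; the first to reach
# threshold b determines the response and the RT.
import random

def lba_trial(drift_old, drift_new, b, A=0.5, s=0.3, t0=0.2, rng=random):
    rts = {}
    for resp, v in (("old", drift_old), ("new", drift_new)):
        rate = max(rng.gauss(v, s), 1e-6)   # truncate negative drifts
        start = rng.uniform(0, A)
        rts[resp] = t0 + (b - start) / rate
    resp = min(rts, key=rts.get)
    return resp, rts[resp]

def false_alarm_rate(b, n=20000, seed=1):
    rng = random.Random(seed)
    # A lure: weaker "old" evidence than "new" evidence (made-up drifts).
    fas = sum(lba_trial(0.6, 1.0, b=b, rng=rng)[0] == "old"
              for _ in range(n))
    return fas / n

# Lowering the threshold (time pressure) inflates false alarms,
# purely as a speed-accuracy tradeoff.
print(false_alarm_rate(b=2.0), false_alarm_rate(b=0.8))
```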
we *also* captured complete RT distributions! we had to collect our own data, and some pretty big datasets at that, as we needed lots of critical lure trials in order to get stable estimates of the RT distributions. RTs for both responses were captured using an LBA process.
our model can capture *both* semantic and perceptual DRM errors. we used word2vec to capture semantic confusions and open bigrams to capture perceptual confusions. both types of representations are retrieved in a global matching framework.
what folks don't always appreciate is that there is also a *perceptual* DRM paradigm, where there are similar errors when subjects study similar looking or sounding words - studying "fat", "that", and "cab" etc. will often lead to high false recognition of the word "cat"
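a toy sketch of the open-bigram idea (my illustration, not the representations used in the paper; the Jaccard overlap measure is an assumption): a word is coded by its ordered letter pairs, and perceptual similarity is the overlap between bigram sets - so look-alike study words each share bigrams with the unstudied lure, and summing those matches in a global matching framework inflates the lure's signal.

```python
# Open-bigram representation: every ordered letter pair in a word,
# not necessarily adjacent. Perceptual similarity is then set overlap.
def open_bigrams(word):
    return {word[i] + word[j]
            for i in range(len(word)) for j in range(i + 1, len(word))}

def overlap(a, b):
    """Jaccard overlap between two words' bigram sets (an assumption)."""
    ba, bb = open_bigrams(a), open_bigrams(b)
    return len(ba & bb) / len(ba | bb)

# "fat" shares the bigram "at" with the lure "cat"; an unrelated word
# shares nothing - so a study list of look-alikes stacks up matches
# to the lure.
print(overlap("fat", "cat"), overlap("dog", "cat"))
```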