tyler bonnen
@tylerraye
2K Followers · 2K Following · 86 Media · 567 Statuses
neuroscientist @berkeley_ai. NIH K00 + UC Presidential Postdoctoral Fellow
Unceded Muwekma Ohlone Land
Joined August 2020
starting fall 2026 i'll be an assistant professor at @Penn 🥳 my lab will develop scalable models/theories of human behavior, focused on memory and perception. currently recruiting PhD students in psychology, neuroscience, & computer science! reach out if you're interested 😊
37
61
438
Amazing opportunity to work with a brilliant researcher and all-around wonderful person — definitely apply if you're interested in memory & perception at the intersection of AI & cognitive (neuro)science!
2
6
69
Excited to have Tyler join us next year and for future collaborations already in the works, great opportunity for grad students!
0
1
5
tyler is a wonderful mentor who genuinely cares about the people he works with - an all-around incredible collaborator. highly highly recommend working with tyler :D berkeley will miss him!
0
2
34
i'm in vancouver for #NeurIPS2024, presenting our 3D shape inference benchmark tomorrow! stop by poster #1210 at 4:30 on friday if you're interested. and if you'd like to talk about neuro-ai, human cognition, or suggest nearby hikes, feel free to reach out!
0
7
67
happy to share that we'll be presenting this work at neurips 2024! 🥳 some *surprising* updates coming for the results soon 👀 but all the code/images/data are already available:
project page: https://t.co/iLQ9iHN9k8
🤗: https://t.co/kMLWkI8hnD
code: github.com/tzler/mochi_code (Evaluating Multiview Object Correspondence between Humans and Image models)
3
19
138
we're excited for others to build on our work! paper, code, images, and behavioral data:
paper: https://t.co/ZylL4UzwSq
code: https://t.co/tT5vmVWCAO
🤗: https://t.co/kMLWkI8hnD
project page: https://t.co/iLQ9iHN9k8
10/10
0
0
15
(if you want to know more about the neural structures and algorithms that support these human abilities, definitely check out our recent work: https://t.co/XvcFuNqlSb) 9.5/10
1
0
17
why might time be important for humans? when looking at an image (top row), people reliably attend to specific visual features (middle row), whereas model attention (bottom row) is more uniformly distributed. these gaze dynamics provide clues about how we outperform models 9/10
1
0
11
given that both humans and models struggle with similar image sets, how is it that human performance is robust when models fail? here we compare model performance (x axis) to human reaction time (y axis): humans spend more time on trials where models fail 8/10
1
0
7
when analyzing choice behaviors across trials, model (x axis) and human (y axis) performance is correlated. that is, humans and models struggle with similar image sets 7/10
1
0
6
what enables humans to outperform models? we turn to more granular metrics to understand this human-model gap. specifically, we look at trial-level choice behaviors, reaction time, and gaze data 6/10
1
0
5
using our 3D vision benchmark, we evaluate multiple model classes (DINOv2, CLIP, MAE) across scales (base, large, giant). while DINOv2 performs best, and scaling (x axis) leads to improved accuracy (y axis), humans (dashed line) outperform all models by a wide margin 5/10
2
0
8
we generate 2K unique image sets composed of common and 'nonsense' objects that vary in difficulty; examples from different stimulus groups below. we present these to >500 participants online and in lab, collecting 35K trials of behavioral data. 4/10
1
0
5
concretely, we use 'oddity' tasks (and several variants): given a set of three images, observers must identify which object differs from the others, despite considerable viewpoint variation across images 3/10
1
0
7
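(the oddity task above can be scored for a model with a simple embedding comparison — a minimal sketch, not the authors' code; `pick_oddity` and the toy vectors are hypothetical, standing in for embeddings from any image model:)

```python
import numpy as np

def pick_oddity(embeddings: np.ndarray) -> int:
    """Given one embedding per image (n_images x dim), return the index
    of the predicted odd-one-out: the image least similar, on average,
    to the others under cosine similarity."""
    # normalize rows so dot products are cosine similarities
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T              # pairwise cosine similarities
    np.fill_diagonal(sims, 0.0)           # ignore self-similarity
    mean_sim = sims.sum(axis=1) / (len(embeddings) - 1)
    return int(np.argmin(mean_sim))       # least similar to the rest

# toy trial: two nearby embeddings and one outlier (the "odd" object)
trial = np.array([[1.0, 0.1, 0.0],
                  [0.9, 0.2, 0.0],
                  [0.0, 0.1, 1.0]])
print(pick_oddity(trial))  # → 2
```

(accuracy on the benchmark would then just be the fraction of trials where this prediction matches the ground-truth odd image.)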
cognitive science has developed incisive tasks to understand how humans represent visual objects. here we leverage these tasks to evaluate large-scale vision models in terms of:
- object-level 3D shape inferences
- comparison with human visual abilities
2/10
1
0
8
do large-scale vision models represent the 3D structure of objects? excited to share our benchmark: multiview object consistency in humans and image models (MOCHI) with @xkungfu @YutongBAI1002 @thomaspocon @_yonifriedman @Nancy_Kanwisher Josh Tenenbaum and Alexei Efros 1/👀
13
88
408
“Uncle, are you taking me to the graveyard?” asked the little girl who thought she was dead after an Israeli strike razed her home in Al-Bureij camp in Gaza. "No, darling. You are alive and beautiful like the moon,” the rescuer said.
105
6K
12K
As the Israeli military plunged Gaza into darkness last night, thousands took over NYC's Grand Central Station, staging an emergency sit-in during rush hour to demand an immediate ceasefire and an end to the Israeli government's bombing of Gaza.
99
1K
4K
truly a wonderful talk from @AlisonGopnik. when comparing humans to LLMs, she reminds us: there's no such thing as "general intelligence." our behaviors emerge from a concert of interacting cognitive functions. better to disentangle these systems than mythologize a construct
0
1
38