Taylor Webb Profile
Taylor Webb

@TaylorWWebb

Followers 903 · Following 557 · Media 51 · Statuses 426

Studying cognition in humans and machines.

Joined October 2017
@TaylorWWebb
Taylor Webb
8 months
Excited to announce that I'll be starting a lab at the University of Montreal (Psychology) and Mila (Montreal Institute for Learning Algorithms) in summer 2025. More info to come soon, but I'll be recruiting grad students. Please share / get in touch if you're interested!
@TaylorWWebb
Taylor Webb
1 month
RT @BrianOdegaard2: Led by postdoc Doyeon Lee and grad student Joseph Pruitt, our lab has a new Perspectives piece in PNAS Nexus: "Metaco…"
@TaylorWWebb
Taylor Webb
3 months
RT @MengdiWang10: 🚨 Discover the Science of LLM! We uncover how LLMs (Llama3-70B) achieve abstract reasoning through emergent symbolic mech….
@TaylorWWebb
Taylor Webb
4 months
RT @scychan_brains: New work led by @Aaditya6284: "Strategy coopetition explains the emergence and transience of in-context learning in tra…"
@TaylorWWebb
Taylor Webb
6 months
RT @mikb0b: Why do pre-o3 LLMs struggle with generalization tasks like @arcprize? It's not what you might think. OpenAI o3 shattered the A….
@TaylorWWebb
Taylor Webb
7 months
Truly incredible results. I have been impressed with o1’s capabilities but certainly didn’t expect this leap.
@fchollet
François Chollet
7 months
Today OpenAI announced o3, its next-gen reasoning model. We've worked with OpenAI to test it on ARC-AGI, and we believe it represents a significant breakthrough in getting AI to adapt to novel tasks. It scores 75.7% on the semi-private eval in low-compute mode (for $20 per task…
@TaylorWWebb
Taylor Webb
7 months
RT @fchollet: Today OpenAI announced o3, its next-gen reasoning model. We've worked with OpenAI to test it on ARC-AGI, and we believe it re….
@TaylorWWebb
Taylor Webb
7 months
RT @canondetortugas: Given a high-quality verifier, language model accuracy can be improved by scaling inference-time compute (e.g., w/ rep….
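The technique in the quoted thread is repeated sampling combined with a verifier, often called best-of-n. A minimal sketch of the general idea, assuming hypothetical `generate` and `verify` stand-ins (none of this comes from the referenced paper):

```python
# Best-of-n sampling: draw many candidates, keep the one the verifier
# scores highest. `generate` and `verify` are hypothetical placeholders,
# not an API from the quoted work.
import random

def generate(prompt: str) -> str:
    """Placeholder for one sample from a language model."""
    return f"candidate-{random.randint(0, 9)}"

def verify(prompt: str, answer: str) -> float:
    """Placeholder for a task-specific verifier (unit tests, a proof
    checker, a reward model). Higher scores mean more likely correct."""
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    # Accuracy improves with n as long as the verifier reliably ranks
    # correct answers above incorrect ones; a noisy verifier caps the gain.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: verify(prompt, c))

print(best_of_n("Solve: 17 * 24 = ?", n=16))
```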
@TaylorWWebb
Taylor Webb
7 months
RT @Dongyu_Gong: Introducing our new work on mechanistic interpretability of LLM cognition🤖🧠: why do Transformer-based LLMs have limited wor…
@TaylorWWebb
Taylor Webb
7 months
RT @HopeKean: New paper with @alexanderdfung, @PramodRT9, @jessica__chomik, @Nancy_Kanwisher, @ev_fedorenko on the representations that…
@TaylorWWebb
Taylor Webb
8 months
RT @ARTartaglini: 🚨 New paper at @NeurIPSConf w/ @Michael_Lepori! Most work on interpreting vision models focuses on concrete visual featu….
@TaylorWWebb
Taylor Webb
8 months
RT @Michael_Lepori: Even ducklings🐣can represent abstract visual relations. Can your favorite ViT? In our new @NeurIPSConf paper, we use me….
@TaylorWWebb
Taylor Webb
8 months
RT @MatthiasMichel_: In this new preprint @smfleming and I present a theory of the functions and evolution of conscious vision. This is a b….
@TaylorWWebb
Taylor Webb
8 months
RT @valentina__py: Open Post-Training recipes! Some of my personal highlights: 💡 We significantly scaled up our preference data! (using m…
@TaylorWWebb
Taylor Webb
8 months
It would be great to have a precise enough formulation of ‘approximate retrieval’ for this hypothesis to be rigorously tested. There is a concern that virtually any task can be characterized in this way, by appealing to a vague notion of similarity with other tasks.
@rao2z
Subbarao Kambhampati (కంభంపాటి సుబ్బారావు)
8 months
On the fallacy of the "If it ain't strictly retrieval, it must be reasoning" argument. #SundayHarangue (on Wednesday). There is a tendency among some LLM researchers to claim that LLMs must be somehow capable of doing some sort of reasoning since they are after all not doing the…
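One hypothetical way to make the "approximate retrieval" hypothesis testable, sketched here as an assumption rather than anything proposed in the thread: fix an embedding model in advance, define a task's retrieval similarity as its nearest-neighbor similarity to the training corpus, and check whether accuracy actually tracks that similarity. The names and data below are all illustrative stand-ins:

```python
# If 'approximate retrieval' explains performance, accuracy should rise
# with similarity to the nearest training task; a flat relationship is
# evidence against it. All vectors here are random stand-ins.
import numpy as np

def nearest_train_similarity(task_vec, train_vecs):
    # Cosine similarity to the closest training-task embedding.
    sims = train_vecs @ task_vec / (
        np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(task_vec))
    return sims.max()

rng = np.random.default_rng(0)
train_vecs = rng.normal(size=(1000, 64))  # stand-in training-task embeddings
test_vecs = rng.normal(size=(50, 64))     # stand-in test-task embeddings
accuracy = rng.uniform(size=50)           # stand-in per-task accuracy

sims = np.array([nearest_train_similarity(t, train_vecs) for t in test_vecs])
print(f"correlation(similarity, accuracy) = {np.corrcoef(sims, accuracy)[0, 1]:.3f}")
```

Pinning the hypothesis to a concrete, pre-registered similarity measure is what blocks the failure mode Webb describes: with the measure fixed in advance, "any task is similar to some training task" is no longer available as a post-hoc explanation.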
@TaylorWWebb
Taylor Webb
8 months
This looks like a very useful and important contribution!
@LauraRuis
Laura Ruis
8 months
How do LLMs learn to reason from data? Are they ~retrieving the answers from parametric knowledge🦜? In our new preprint, we look at the pretraining data and find evidence against this: "Procedural knowledge in pretraining drives LLM reasoning" ⚙️🔢 🧵⬇️
@TaylorWWebb
Taylor Webb
8 months
RT @LauraRuis: How do LLMs learn to reason from data? Are they ~retrieving the answers from parametric knowledge🦜? In our new preprint, we….
@TaylorWWebb
Taylor Webb
8 months
RT @MillerLabMIT: More evidence that working memory is not persistent activity. Instead, it is dynamic on/off states with short-term synapt….
@TaylorWWebb
Taylor Webb
8 months
Fascinating paper from Paul Smolensky et al. illustrating how transformers can implement a form of compositional symbol processing, and arguing that an emergent form of this may account for in-context learning in LLMs.
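For background, the classic construction behind this line of work is Smolensky's tensor product representation: bind each filler vector to a role vector with an outer product, superpose the bindings, and recover a filler by contracting with its role. A toy sketch assuming orthonormal roles (it illustrates the general TPR idea, not the paper's specific transformer mechanism):

```python
# Tensor product representation of "john loves mary":
# sum of filler (x) role outer products; contracting with a role
# vector retrieves that role's filler exactly when roles are orthonormal.
import numpy as np

rng = np.random.default_rng(0)
d = 32
fillers = {"john": rng.normal(size=d), "mary": rng.normal(size=d)}
roles = {"agent": np.eye(d)[0], "patient": np.eye(d)[1]}  # orthonormal

T = (np.outer(fillers["john"], roles["agent"])
     + np.outer(fillers["mary"], roles["patient"]))

# Unbind the agent role: T @ r_agent = f_john * (r_agent . r_agent) = f_john.
agent = T @ roles["agent"]
print(np.allclose(agent, fillers["john"]))  # True
```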
@TaylorWWebb
Taylor Webb
8 months
We find that VLMs behave much like human vision when people are forced to respond quickly and must rely on feedforward processing alone. This has implications for the source of difficulty in visual reasoning tasks, and suggests the need for object-centric approaches.