Devon Jarvis

@devonjarvi5

Followers: 151 · Following: 121 · Media: 8 · Statuses: 43

Lecturer in Theoretical ML at Wits University. Founder of the Cognition, Adaptation and Learning Lab (@caandl_lab). Google PhD Fellow and Commonwealth Scholar.

Johannesburg, South Africa
Joined September 2016
@devonjarvi5
Devon Jarvis
3 months
Our paper, “Make Haste Slowly: A Theory of Emergent Structured Mixed Selectivity in Feature Learning ReLU Networks”, will be presented at ICLR 2025 this week! We derive closed-form dynamics for some (remarkably linear) feature learning ReLU networks (1/9).
openreview.net
In spite of finite-dimensional ReLU neural networks being a consistent factor behind recent deep learning successes, a theory of feature learning in these models remains elusive. Currently...
@devonjarvi5
Devon Jarvis
2 months
RT @raillabwits: Great news! This work has also been accepted into RLC for the Journal to Conference Track. Congrats @geraudnt and teams fo….
@devonjarvi5
Devon Jarvis
3 months
RT @kiradusterwald: Looking forward to presenting Tree-WSV for fast unsupervised ground metric learning at @iclr_conf 2025 tomorrow at 3 pm….
@devonjarvi5
Devon Jarvis
3 months
RT @ClementineDomi6: Had so much fun presenting our two posters at ICLR with @devonjarvi5 and @nico_anguita ! Thanks to everyone who came b….
@devonjarvi5
Devon Jarvis
3 months
RT @caandl_lab: CAandL Lab is at ICLR this week with 3 poster presentations. Come by and chat if you are interested! Congrats to all of the….
@devonjarvi5
Devon Jarvis
3 months
A huge thanks to my supervisors @kleinric, @BenjaminRosman and @SaxeLab for their constant assistance and guidance. Looking forward to chatting more on Thursday at 3pm, Poster #349! 👋 (9/9).
@devonjarvi5
Devon Jarvis
3 months
Finally, we provide an initial hidden-layer clustering algorithm that can identify a ReLN for a given ReLU network, with the goal of enabling future work with our paradigm. (8/9)
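A guess at the flavour of such a clustering step (the actual algorithm is the one in the paper, not reproduced here): group hidden units whose on/off gating patterns across the dataset coincide, since units sharing a gate act as one linear pathway of the corresponding ReLN. All names and weights below are illustrative.

```python
# Hypothetical sketch: cluster hidden units of a ReLU network by their binary
# activation (gating) patterns over a dataset.
import numpy as np

def cluster_hidden_units(X, W1, tol=0.0):
    """X: (n_samples, n_in) data; W1: (n_hidden, n_in) first-layer weights."""
    pre = X @ W1.T                        # (n_samples, n_hidden) pre-activations
    gates = (pre > tol).astype(int)       # on/off pattern of each unit per sample
    patterns, cluster_ids = np.unique(gates.T, axis=0, return_inverse=True)
    return cluster_ids, patterns          # units with equal ids share a pattern

rng = np.random.default_rng(2)
ids, _ = cluster_hidden_units(rng.normal(size=(16, 4)), rng.normal(size=(6, 4)))
print(ids)
```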
@devonjarvi5
Devon Jarvis
3 months
We consider the effect of adding depth to the ReLU network and find that the network still favours linear pathways with mixed-selective representations. However, there is variance in the dynamics, and we show that a corresponding GDLN can still model the distribution of dynamics. (7/9)
@devonjarvi5
Devon Jarvis
3 months
We obtain closed-form dynamics for the ReLU network in this setting and see near-perfect agreement with the predicted dynamics. We prove that the identified ReLN is unique and show that the mixed-selectivity preference remains when the number of contexts is increased. (6/9)
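For flavour only, here is the kind of closed-form trajectory that deep linear theory provides: the classic two-layer sigmoidal mode dynamics of Saxe et al. (2013), which the GDLN/ReLN reduction builds on. These are not the paper's own expressions; the constants are illustrative.

```python
# Closed-form sigmoidal mode dynamics for a two-layer linear network:
# solution of da/dt = (2a/tau) * (s - a), rising from a0 and saturating at s.
import numpy as np

def mode_strength(t, s, a0=1e-3, tau=1.0):
    return s / (1.0 + (s / a0 - 1.0) * np.exp(-2.0 * s * t / tau))

t = np.linspace(0.0, 10.0, 6)
print(mode_strength(t, s=3.0))   # rises from ~a0 and saturates at s
```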
@devonjarvi5
Devon Jarvis
3 months
We then consider a complex, nonlinear contextual task. We find that the ReLU network has an inductive bias towards mixed-selective latent representations, where no hidden neuron is selective for an item or context. Instead, it couples linear pathways to favour learning speed. (5/9)
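One simple, hypothetical way to probe what "mixed selectivity" means here (the paper's own analysis may differ): for each hidden unit, check whether its mean activity is modulated by the item, by the context, or by both. The labels and random activations below are placeholders.

```python
# Toy selectivity probe: a unit modulated by both item and context is "mixed".
import numpy as np

def selectivity(acts, items, contexts):
    """acts: (n_samples, n_hidden) hidden activations; items/contexts: labels."""
    labels = []
    for unit in acts.T:
        by_item = [unit[items == i].mean() for i in np.unique(items)]
        by_ctx  = [unit[contexts == c].mean() for c in np.unique(contexts)]
        item_mod = np.ptp(by_item) > 1e-6   # response changes with the item?
        ctx_mod  = np.ptp(by_ctx) > 1e-6    # response changes with the context?
        labels.append("mixed" if item_mod and ctx_mod
                      else "item" if item_mod
                      else "context" if ctx_mod else "none")
    return labels

rng = np.random.default_rng(3)
items, contexts = np.repeat(np.arange(4), 2), np.tile(np.arange(2), 4)
print(selectivity(rng.random((8, 5)), items, contexts))
```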
@devonjarvi5
Devon Jarvis
3 months
We demonstrate the paradigm on an extended XOR task which includes a third feature, making the dataset linearly separable. We observe a transition from the ReLU network using the linear strategy to the nonlinear strategy based on the magnitude of the new feature. (4/9)
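As a rough illustration of the kind of dataset meant here (an illustrative construction, not necessarily the paper's exact task): standard XOR inputs plus a third feature whose sign matches the label, with a magnitude m that controls how attractive the purely linear strategy is.

```python
# Hypothetical "extended XOR" dataset: XOR in the first two features, plus a
# third feature aligned with the label so the task becomes linearly separable.
import numpy as np

def extended_xor(m):
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([0., 1., 1., 0.])        # XOR labels
    third = m * (2 * y - 1)               # +m for class 1, -m for class 0
    return np.column_stack([X, third]), y

X_small, y = extended_xor(m=0.1)  # small magnitude: nonlinear strategy expected
X_large, _ = extended_xor(m=2.0)  # large magnitude: linear strategy expected
```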
@devonjarvi5
Devon Jarvis
3 months
We define the Rectified Linear Network (ReLN) as the GDLN which has the same output as the ReLU network at all points in time and prove that a ReLN always exists for a given ReLU network. Thus, we obtain the dynamics of learning for the ReLU networks as well! (3/9)
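The equivalence can be seen in a few lines: if the gates are taken to be the ReLU network's own activation pattern, a gated linear pass reproduces the ReLU output exactly. A toy sketch of this observation, with made-up weights:

```python
# Sketch of the observation behind the ReLN construction: gating the linear
# pre-activations with the ReLU net's own on/off pattern reproduces its output.
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(3, 8))
x = rng.normal(size=4)

pre = W1 @ x
relu_out = W2 @ np.maximum(pre, 0.0)   # ordinary ReLU network output

gate = (pre > 0).astype(float)         # which hidden units are "on" for x
gdln_out = W2 @ (gate * pre)           # purely linear once the gates are fixed

assert np.allclose(relu_out, gdln_out) # identical outputs at this point in time
```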
@devonjarvi5
Devon Jarvis
3 months
Our main idea is to map ReLU networks onto Gated Deep Linear Networks (GDLNs) (Saxe et al., 2022). GDLNs are neural networks which nonlinearly combine linear neural networks using gating operations. Importantly, GDLNs permit a reduction of their learning dynamics. (2/9)
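To make the gating idea concrete, here is a minimal sketch of a GDLN forward pass, assuming a two-layer network with fixed binary gates on the hidden units; the sizes, weights, and gate pattern are illustrative rather than taken from the paper.

```python
# Minimal sketch (not the authors' code): a two-layer gated deep linear
# network. Every transformation is linear; the only nonlinearity comes
# from the binary gate pattern that switches hidden units on or off.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 3
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))   # first linear layer
W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))  # second linear layer

def gdln_forward(x, gate):
    """Forward pass: gate selects which linear pathway is active for x."""
    h = gate * (W1 @ x)   # linear map followed by multiplicative gating
    return W2 @ h         # second linear map, no elementwise nonlinearity

x = rng.normal(size=n_in)
gate = rng.integers(0, 2, size=n_hidden)  # hypothetical input/context-dependent gate
print(gdln_forward(x, gate))
```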
@devonjarvi5
Devon Jarvis
3 months
More information can be found at our website: If you are interested, keep an eye out for more to come and get in touch! 🙌.
@devonjarvi5
Devon Jarvis
3 months
I'm excited to announce that I have received the Thuthuka Research Grant from the @NRF_News! I will use this grant to start my own lab focused on comp neuro and ML theory at @WitsUniversity: the @caandl_lab! This is a joint effort with @VictoriaWi83762, @stefsmlab and @geraudnt.
@devonjarvi5
Devon Jarvis
4 months
RT @ClementineDomi6: Our paper, “A Theory of Initialization’s Impact on Specialization,” has been accepted to ICLR 2025!..
@devonjarvi5
Devon Jarvis
5 months
RT @BlavatnikAwards: 2025 @Blavatnikawards UK 🇬🇧 Finalist Andrew Saxe from UCL was featured on the @BBC Science Focus Instant Genius Podcas….
podcasts.apple.com
Podcast Episode · Instant Genius · 03/03/2025 · 21m
@devonjarvi5
Devon Jarvis
9 months
RT @Sahba_Besh: The countdown begins for our South African Brainhack!! Watch this space for ways to connect in virtually too! @CIFAR_News @….