Gerstner Lab
@compneuro_epfl
Followers
2K
Following
293
Media
8
Statuses
180
The Laboratory of Computational Neuroscience @EPFL_en studies models of #neurons, #networks of #neurons, #synapticplasticity, and #learning in the brain.
Lausanne, Switzerland
Joined March 2018
Our latest results (with @nickyclayton22) are now out in @NatureComms: https://t.co/ryHpTSIFRX 🥳 We propose a model of *28* behavioral experiments with food-caching jays using a *single* neural network equipped with episodic-like memory and 3-factor RL plasticity rules. 1/6
1
14
43
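For readers unfamiliar with the term, a three-factor plasticity rule combines presynaptic and postsynaptic activity with a global modulatory signal such as reward or novelty. A minimal generic form, shown only as an illustration (not the paper's exact equations):

```latex
% Generic three-factor rule (illustrative): an eligibility trace e_ij filters coincident
% pre/post activity, and a global modulator M(t) (e.g. reward) gates the weight change.
\frac{de_{ij}}{dt} = -\frac{e_{ij}}{\tau_e}
  + f\big(\mathrm{pre}_j(t)\big)\, g\big(\mathrm{post}_i(t)\big),
\qquad
\frac{dw_{ij}}{dt} = \eta\, M(t)\, e_{ij}(t)
```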
The LCN is gradually moving away from X. You can follow our most recent news on our new account on BlueSky at
0
0
1
If you're at #foragingconference2024, come check out our poster (#60) with @modirshanechi and @compneuro_epfl today! Using a unified computational framework and two open-access datasets, we show how novelty and novelty-guided behaviors are influenced by stimulus similarities😊🤩
1
2
7
I'm thrilled to share that I was recently awarded the @EPFL_en Dimitris N. Chorafas Foundation Award for my Ph.D. thesis, "Seeking the new, learning from the unexpected: Computational models of #surprise and #novelty in the #brain." Award news: https://t.co/X0ao8QeAkD
1
1
51
Episode #22 in #TheoreticalNeurosciencePodcast: On 50 years with the Hopfield network model - with Wulfram Gerstner @compneuro_epfl
https://t.co/ZTRnkYQqLy John Hopfield received the 2024 Nobel Prize in Physics for his model published in 1982. What is the model all about?
1
18
70
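For anyone wondering, the classic 1982 formulation fits in one line each for weights, dynamics, and energy (standard textbook form):

```latex
% Hopfield network (1982): N binary neurons S_i = ±1, Hebbian weights storing P patterns ξ^μ;
% asynchronous updates never increase the energy E, so stored patterns act as attractors.
w_{ij} = \frac{1}{N}\sum_{\mu=1}^{P} \xi_i^{\mu}\xi_j^{\mu} \quad (i \neq j),
\qquad
S_i \leftarrow \operatorname{sgn}\Big(\sum_{j} w_{ij} S_j\Big),
\qquad
E = -\frac{1}{2}\sum_{i \neq j} w_{ij} S_i S_j
```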
📢 I'm on the faculty job market this year! My research explores the foundations of deep learning and analyzes learning and feature geometry for Gaussian inputs. I detail my major contributions👇Retweet if you find it interesting and help me spread the word! DM is open. 1/n
1
23
78
🚨Preprint alert🚨 In an amazing collaboration with @GruazL53069, @sobeckerneuro, & J Brea, we explored a major puzzle in neuroscience & psychology: *What are the merits of curiosity⁉️* https://t.co/Au2HxPbZQL 1/7
1
13
60
Headed to the @BernsteinNeuro Conference this weekend? If you're interested in how biological computation is performed across scales, from single neurons to populations, whole-brain dynamics, and even astrocytes and the whole body, drop by our workshop, co-organized w/ @neuroprinciples
1
13
70
1. Synaptic weight scaling in O(1/N) self-induces a form of (implicit) spatial structure in networks of spiking neurons, as the number of neurons N tends to infinity. This is what D.T. Zhou, P.-E. Jabin and I prove in https://t.co/WbrqmDmvDp.
arxiv.org
The dynamics of spatially-structured networks of $N$ interacting stochastic neurons can be described by deterministic population equations in the mean-field limit. While this is known, a general...
3
7
37
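To unpack the scaling claim: in the standard mean-field picture each weight is of order 1/N, so the summed input to a neuron stays finite as N grows and the population activity becomes deterministic. Schematically (a textbook version, not the precise statement proved in the paper):

```latex
% Schematic mean-field scaling: with weights of order 1/N the total input stays O(1),
% and the empirical population activity converges to a deterministic rate as N -> infinity.
I_i(t) = \sum_{j=1}^{N} \frac{J_{ij}}{N}\, s_j(t),
\qquad
\frac{1}{N}\sum_{j=1}^{N} s_j(t) \;\xrightarrow[N \to \infty]{}\; \nu(t)
```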
Next Monday, I'll present to the EfficientML reading group how we exploit symmetries to identify the weights of a black-box network 📒 Have a look if you're interested in Expand-and-Cluster: https://t.co/TjRaahKpm8 Thanks @osaukh for the invite!
sites.google.com
This reading group examines the interplay between the theoretical foundations of deep learning and the practical challenge of making machine learning efficient. On the theory side, we study mathema...
📕Recovering network weights from a set of input-output neural activations 👀 Ever wondered if this is even possible? 🤔 Check out Expand-and-Cluster, our latest paper at #ICML2024! Thu. 11:30 #2713
https://t.co/3AcQezuMqW A thread 🧵 ⚠️ Loss landscape and symmetries ahead ⚠️
0
1
9
And it's a book! Together with @okaysteve, we have gathered some of the leading experts in the field, each of whom has generously contributed a chapter to what has become the first-ever book on #engram biology! 📖🔥🧠 Come take a look! ⬇️⬇️⬇️ https://t.co/S00ThFw2ER
5
47
220
Approximation-free training method for deep SNNs using time-to-first-spike coding.
Today in @NatureComms. 📝 Open puzzle: training event-based spiking neurons has seemed mysteriously impossible. @Ana__Stan 👩🏻🔬 shows it becomes possible using a theoretical equivalence between ReLU CNNs and event-based CNNs. Congrats! 🧵 https://t.co/LiBFj3bg5h
0
3
10
Today in @NatureComms. 📝 Open puzzle: training event-based spiking neurons has seemed mysteriously impossible. @Ana__Stan 👩🏻🔬 shows it becomes possible using a theoretical equivalence between ReLU CNNs and event-based CNNs. Congrats! 🧵 https://t.co/LiBFj3bg5h
nature.com
Nature Communications - To address challenges of training spiking neural networks (SNNs) at scale, the authors propose a scalable, approximation-free training method for deep SNNs using...
2
11
64
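The intuition behind time-to-first-spike (TTFS) coding, roughly: each neuron fires at most once and larger activations fire earlier, which is what makes a mapping to ReLU activations conceivable. A toy encoding in that spirit (my own sketch, not the paper's construction):

```python
import numpy as np

def ttfs_encode(activation, t_max=1.0):
    """Toy time-to-first-spike encoding: larger (positive) activations spike earlier.

    Zero or negative activations (the ReLU 'off' regime) never spike within the
    coding window, mirroring a ReLU output of 0. Illustrative sketch only.
    """
    a = np.asarray(activation, dtype=float)
    t = np.full(a.shape, np.inf)          # no spike by default
    on = a > 0
    t[on] = t_max / (1.0 + a[on])         # monotone: bigger activation -> earlier spike
    return t

def ttfs_decode(spike_times, t_max=1.0):
    """Invert the toy encoding back to the ReLU-like activation."""
    t = np.asarray(spike_times, dtype=float)
    a = np.zeros(t.shape)
    fired = np.isfinite(t)
    a[fired] = t_max / t[fired] - 1.0
    return a

x = np.array([-0.5, 0.0, 0.3, 2.0])
print(ttfs_decode(ttfs_encode(x)))        # ~ [0, 0, 0.3, 2.0] = ReLU(x)
```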
📕Recovering network weights from a set of input-output neural activations 👀 Ever wondered if this is even possible? 🤔 Check out Expand-and-Cluster, our latest paper at #ICML2024! Thu. 11:30 #2713
https://t.co/3AcQezuMqW A thread 🧵 ⚠️ Loss landscape and symmetries ahead ⚠️
6
12
49
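Roughly, the idea the thread points at: hidden units of different networks fitted to the same input-output data should line up with the underlying network's units, up to neuron permutation and ReLU rescaling symmetries, so clustering their weight vectors can expose those units. A toy illustration of that clustering step only, with made-up data (my own sketch, not the Expand-and-Cluster algorithm itself):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy illustration: pretend several overparameterised student networks were fitted to the
# same teacher; their useful hidden units then concentrate around the teacher's weight
# vectors, up to neuron permutation and ReLU rescaling symmetry.
teacher = rng.normal(size=(3, 5))                      # 3 teacher neurons, 5 inputs

students = []
for _ in range(4):                                     # 4 students, 6 hidden units each
    idx = rng.integers(0, 3, size=6)                   # each unit copies some teacher neuron
    scale = rng.uniform(0.5, 2.0, size=(6, 1))         # ReLU rescaling symmetry
    students.append(scale * teacher[idx] + 0.01 * rng.normal(size=(6, 5)))

# Remove the scaling symmetry by normalising each weight vector, then cluster.
W = np.vstack(students)
W = W / np.linalg.norm(W, axis=1, keepdims=True)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(W)
recovered = np.array([W[labels == k].mean(axis=0) for k in range(3)])
print(recovered)   # cluster centres ≈ teacher rows (up to ordering and scale)
```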
Excited to share a blog post on our recent work (https://t.co/Gj8ftn1n64) on neural network distillation: https://t.co/XfBW2Vc07e If you liked the 'toy models of superposition' or the 'pizza and clock' papers, you might enjoy reading this blog post!
bsimsek.com
It is important to understand how large models represent knowledge to make them efficient and safe. We study a toy model of neural nets that exhibits non-linear dynamics and phase transition....
1
8
38
Normative theories show that a surprise signal is necessary to speed up learning after an abrupt change in the environment; but how can such a speed-up be implemented in the brain? 🧠 We make a proposition in our new paper in @PLOSCompBiol. https://t.co/OSTnEAq9Bg
journals.plos.org
Author summary Everybody knows the subjective feeling of surprise and behavioral reactions to surprising events such as startle response and pupil dilation are widely studied—but how can surprise...
1
11
32
Specifically, we propose a Spiking Neural Network model where E-I imbalance is used to extract a surprise signal for modulation of synaptic plasticity (via three-factor learning rules). This design connects high-level cognitive models of surprise to circuit-level mechanisms.
0
0
3
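One way to read the normative claim above, purely as an illustration (the paper's exact rule may differ): a surprise measure S_t scales the effective learning rate, so new observations are weighted more heavily right after an abrupt change.

```latex
% Illustrative surprise-modulated update (assumed form, not necessarily the paper's):
% the effective learning rate gamma_t grows with a surprise measure S_t, so estimates
% adapt quickly right after an abrupt change in the environment.
\gamma_t = \frac{m\, S_t}{1 + m\, S_t},
\qquad
\hat{\theta}_{t+1} = \hat{\theta}_t + \gamma_t\,\big(x_t - \hat{\theta}_t\big)
```

And a toy sketch of the circuit-level ingredients named in the tweet above (my own illustration, not the paper's model): inhibition learns to cancel the expected excitation, so the E-I mismatch is large right after a change and decays as balance is restored; that mismatch serves as the third factor gating a Hebbian eligibility trace.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sketch (not the paper's model): the E-I mismatch acts as a surprise-like third
# factor. Inhibition slowly learns to cancel excitation, so the mismatch spikes after
# an abrupt change in the input statistics and then decays.
n_in = 20
w_exc = rng.uniform(0.0, 1.0, n_in)   # "true" excitatory weights (change mid-run)
w_inh = np.zeros(n_in)                # plastic inhibition, learns to predict excitation
w_out = np.zeros(n_in)                # readout trained with a three-factor rule
elig = np.zeros(n_in)

for t in range(400):
    if t == 200:                      # abrupt change in the environment
        w_exc = rng.uniform(0.0, 1.0, n_in)
    x = (rng.random(n_in) < 0.3).astype(float)   # presynaptic spikes
    exc, inh = w_exc @ x, w_inh @ x
    surprise = abs(exc - inh)         # E-I imbalance read out as surprise
    post = 1.0 if exc - inh > 0 else 0.0
    elig = 0.9 * elig + x * post      # Hebbian eligibility trace (pre x post)
    w_out += 0.05 * surprise * elig   # three-factor update: pre, post, surprise
    w_inh += 0.05 * (exc - inh) * x   # inhibition tracks excitation (restores balance)
    if t % 100 == 0:
        print(f"t={t:3d}  surprise={surprise:.2f}")
```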
What do we talk about when we talk about "curiosity"? 🤔 In our new paper in @TrendsNeuro (with @KacperKond, @compneuro_epfl & @sebhaesler), we address this question by reviewing the behavioral signatures, neural mechanisms, and comp. models of curiosity: https://t.co/W8SjPPsBf1
4
59
279
Excited that our new position piece is out! In this article, @summerfieldlab and I review three recent advances in using deep RL to model cognitive flexibility, a hallmark of human cognition: https://t.co/FX52syfUCr (1/4)
3
24
73
Intriguing new paper from the Gerstner lab proposes a theory for sparse coding and synaptic plasticity in cortical networks to overcome spurious input correlations.
journals.plos.org
Author summary To understand how our brains carve out meaningful stimuli from a sea of sensory information, experimentalists often focus on individual neurons and their receptive fields; i.e., the...
Most methods of sparse coding or ICA assume the 'pre-whitening' of inputs. @cstein06 shows that this is not necessary with a smart local Hebbian learning rule and ReLU neurons! Paper just out in @PLOSCompBiol: https://t.co/d3cFN4oxOm
0
4
13
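For context, 'pre-whitening' means decorrelating and rescaling the inputs so their covariance is the identity before the Hebbian/ICA rule is applied. A toy sketch of why that assumption usually matters (my illustration, not the rule from the paper): on correlated inputs, a plain Oja-style Hebbian neuron with a ReLU output converges towards the dominant correlation direction, i.e. a mixture of sources rather than a single source.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy illustration of why standard Hebbian/ICA rules assume whitened input (my sketch,
# not the paper's rule): with correlated inputs, an Oja-style Hebbian neuron converges
# to the dominant correlation direction instead of an independent source.
n, T = 2, 20000
sources = np.sign(rng.normal(size=(T, n))) * rng.exponential(1.0, size=(T, n))  # sparse sources
A = np.array([[1.0, 0.9],
              [0.9, 1.0]])                 # mixing introduces strong input correlations
X = sources @ A.T

w = rng.normal(size=n)
for x in X:
    y = max(w @ x, 0.0)                    # ReLU output
    w += 1e-4 * y * (x - y * w)            # Oja-style Hebbian rule with weight decay

print("learned direction:", w / np.linalg.norm(w))
# With correlated X this aligns (up to sign) with the top eigenvector of the input
# covariance, ~[0.71, 0.71], i.e. a mixture: hence the usual whitening requirement.
```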