Gerstner Lab

@compneuro_epfl

Followers: 2K · Following: 293 · Media: 8 · Statuses: 180

The Laboratory of Computational Neuroscience @EPFL_en studies models of #neurons, #networks of #neurons, #synapticplasticity, and #learning in the brain.

Lausanne, Switzerland
Joined March 2018
@compneuro_epfl
Gerstner Lab
2 years
Our latest results (with @nickyclayton22) are now out in @NatureComms: https://t.co/ryHpTSIFRX 🥳 We propose a model of *28* behavioral experiments with food-caching jays using a *single* neural network equipped with episodic-like memory and 3-factor RL plasticity rules. 1/6
1 reply · 14 reposts · 43 likes
@compneuro_epfl
Gerstner Lab
9 months
The LCN is gradually moving away from X. You can follow our most recent news on our new Bluesky account at
0 replies · 0 reposts · 1 like
@sobeckerneuro
Sophia Becker
11 months
If you're at #foragingconference2024, come check out our poster (#60) with @modirshanechi and @compneuro_epfl today! Using a unified computational framework and two open-access datasets, we show how novelty and novelty-guided behaviors are influenced by stimulus similarities 😊🤩
1 reply · 2 reposts · 7 likes
@modirshanechi
Alireza Modirshanechi
11 months
I'm thrilled to share that I was recently awarded the @EPFL_en Dimitris N. Chorafas Foundation Award for my Ph.D. thesis, "Seeking the new, learning from the unexpected: Computational models of #surprise and #novelty in the #brain." Award news: https://t.co/X0ao8QeAkD
1 reply · 1 repost · 51 likes
@GauteEinevoll
Gaute Einevoll
11 months
Episode #22 in #TheoreticalNeurosciencePodcast: On 50 years with the Hopfield network model - with Wulfram Gerstner @compneuro_epfl https://t.co/ZTRnkYQqLy John Hopfield received the 2024 Nobel Prize in Physics for his model published in 1982. What is the model all about?
1 reply · 18 reposts · 70 likes
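For readers wondering what the model is all about, a compact textbook summary (our gloss, not the episode's content): $N$ binary neurons $s_i = \pm 1$ store $P$ patterns $\xi^\mu$ in symmetric Hebbian weights, and asynchronous threshold updates never increase an energy function, so the stored patterns act as attractors of the dynamics:

$$w_{ij} = \frac{1}{N}\sum_{\mu=1}^{P}\xi_i^{\mu}\xi_j^{\mu}, \qquad s_i \leftarrow \operatorname{sign}\Big(\sum_j w_{ij}\, s_j\Big), \qquad E(\mathbf{s}) = -\frac{1}{2}\sum_{i\neq j} w_{ij}\, s_i s_j.$$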
@bsimsek13
Berfin Simsek
11 months
📢 I'm on the faculty job market this year! My research explores the foundations of deep learning and analyzes learning and feature geometry for Gaussian inputs. I detail my major contributions 👇 Retweet if you find it interesting and help me spread the word! DMs are open. 1/n
1 reply · 23 reposts · 78 likes
@modirshanechi
Alireza Modirshanechi
1 year
🚨Preprint alert🚨 In an amazing collaboration with @GruazL53069, @sobeckerneuro, & J Brea, we explored a major puzzle in neuroscience & psychology: *What are the merits of curiosity⁉️* https://t.co/Au2HxPbZQL 1/7
1 reply · 13 reposts · 60 likes
@roxana_zeraati
Roxana Zeraati
1 year
Headed to the @BernsteinNeuro Conference this weekend? If you're interested in how biological computation is performed across scales, from single neurons to populations and whole-brain dynamics, and even astrocytes and the whole body, drop by our workshop, co-organized w/ @neuroprinciples
1 reply · 13 reposts · 70 likes
@schmutz_val
Valentin Schmutz
1 year
1. Synaptic weight scaling as O(1/N) self-induces a form of (implicit) spatial structure in networks of spiking neurons as the number of neurons N tends to infinity. This is what D.T. Zhou, P.-E. Jabin and I prove in https://t.co/WbrqmDmvDp.
arxiv.org: The dynamics of spatially-structured networks of $N$ interacting stochastic neurons can be described by deterministic population equations in the mean-field limit. While this is known, a general...
3 replies · 7 reposts · 37 likes
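For context, a rough sketch of the mean-field setting the abstract refers to (standard textbook scaling, not the paper's precise statement): with synaptic weights of order $1/N$, the input to neuron $i$ is

$$h_i(t) = \frac{1}{N}\sum_{j=1}^{N} J_{ij}\, S_j(t),$$

and as $N \to \infty$ the fluctuations average out, so the population activity obeys deterministic population equations. The result above concerns the implicit spatial structure that this $O(1/N)$ scaling itself induces in that limit.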
@FlaviohMar
Flavio Martinelli
1 year
Next Monday, I'll present to the EfficientML reading group 📒 how we exploit symmetries to identify the weights of a black-box network. Have a look if you're interested in Expand-and-Cluster: https://t.co/TjRaahKpm8 Thanks @osaukh for the invite!
sites.google.com: This reading group examines the interplay between the theoretical foundations of deep learning and the practical challenge of making machine learning efficient. On the theory side, we study mathema...
0 replies · 1 repost · 9 likes
@GraeffJohannes
Johannes Gräff
1 year
And it's a book! Together with @okaysteve, we have gathered some of the leading experts in the field, who have each generously contributed a chapter to what has become the first-ever book on #engram biology! 📖🔥🧠 Come take a look! ⬇️⬇️⬇️ https://t.co/S00ThFw2ER
5 replies · 47 reposts · 220 likes
@NatureComms
Nature Communications
1 year
Approximation-free training method for deep SNNs using time-to-first-spike coding.
0 replies · 3 reposts · 10 likes
@BellecGuill
Guillaume Bellec
1 year
Today in @NatureComms 📝 Open puzzle: training event-based spiking neurons has seemed mysteriously impossible. @Ana__Stan 👩🏻‍🔬 shows it becomes possible using a theoretical equivalence between ReLU CNNs and event-based CNNs. Congrats! 🧵 https://t.co/LiBFj3bg5h
nature.com: Nature Communications - To address challenges of training spiking neural networks (SNNs) at scale, the authors propose a scalable, approximation-free training method for deep SNNs using...
2 replies · 11 reposts · 64 likes
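As a toy illustration of the coding idea behind the paper (a minimal sketch under our own assumptions: a coding window of length T and activations normalized to [0, T]; the paper's actual construction maps trained ReLU CNN weights to spiking-network parameters exactly):

```python
import numpy as np

# Time-to-first-spike (TTFS) coding in one line: a non-negative ReLU
# activation is represented by how *early* a neuron fires within a
# coding window of length T. Larger activation -> earlier spike.

T = 1.0  # hypothetical coding window length (an assumption of this sketch)

def relu(x):
    return np.maximum(x, 0.0)

def encode_ttfs(a, T=T):
    """Map activations in [0, T] to first-spike times in [0, T]."""
    a = np.clip(a, 0.0, T)  # zero activation -> latest possible time T
    return T - a

def decode_ttfs(t, T=T):
    """Invert the encoding: recover the activation from the spike time."""
    return T - t

x = np.array([0.2, -0.5, 0.9])
a = relu(x)                            # ReLU activations
t = encode_ttfs(a)                     # spike times carrying the same values
assert np.allclose(decode_ttfs(t), a)  # exact, approximation-free round trip
```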
@FlaviohMar
Flavio Martinelli
1 year
📕Recovering network weights from a set of input-output neural activations 👀 Ever wondered if this is even possible? 🤔 Check out Expand-and-Cluster, our latest paper at #ICML2024! Thu. 11:30 #2713 https://t.co/3AcQezuMqW A thread 🧵 ⚠️ Loss landscape and symmetries ahead ⚠️
6 replies · 12 reposts · 49 likes
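For readers curious why recovering weights is subtle, a self-contained sketch of the symmetries involved (a generic ReLU property, not the paper's algorithm): permuting hidden units, or rescaling a unit's input weights by c > 0 while dividing its output weight by c, leaves the network function unchanged, so weights are only identifiable up to these transformations.

```python
import numpy as np

# Two different weight settings of a one-hidden-layer ReLU network that
# compute exactly the same function: this is the identifiability problem
# that symmetry-aware methods must handle.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # hidden-layer weights (4 units, 3 inputs)
w2 = rng.normal(size=4)       # output weights

def f(x, W1, w2):
    return w2 @ np.maximum(W1 @ x, 0.0)

perm = np.array([2, 0, 3, 1])       # permute the hidden units
c = np.array([0.5, 2.0, 3.0, 1.5])  # positive per-unit rescalings
W1b = (c[:, None] * W1)[perm]       # rescale rows, then permute
w2b = (w2 / c)[perm]                # compensate in the output weights

x = rng.normal(size=3)
assert np.isclose(f(x, W1, w2), f(x, W1b, w2b))  # identical outputs
```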
@bsimsek13
Berfin Simsek
2 years
Excited to share a blog post on our recent work (https://t.co/Gj8ftn1n64) on neural network distillation: https://t.co/XfBW2Vc07e If you liked the toy-models-of-superposition or pizza-and-clock papers, you might enjoy reading this blog post!
bsimsek.com: It is important to understand how large models represent knowledge to make them efficient and safe. We study a toy model of neural nets that exhibits non-linear dynamics and phase transition....
1 reply · 8 reposts · 38 likes
@compneuro_epfl
Gerstner Lab
2 years
Normative theories show that a surprise signal is necessary to speed up learning after an abrupt change in the environment; but how can such a speed-up be implemented in the brain? 🧠 We propose an answer in our new paper in @PLOSCompBiol. https://t.co/OSTnEAq9Bg
journals.plos.org: Author summary Everybody knows the subjective feeling of surprise and behavioral reactions to surprising events such as startle response and pupil dilation are widely studied—but how can surprise...
1 reply · 11 reposts · 32 likes
@compneuro_epfl
Gerstner Lab
2 years
Specifically, we propose a Spiking Neural Network model where E-I imbalance is used to extract a surprise signal for modulation of synaptic plasticity (via three-factor learning rules). This design connects high-level cognitive models of surprise to circuit-level mechanisms.
0 replies · 0 reposts · 3 likes
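For context, a minimal sketch of a generic three-factor rule of this kind (our own toy, not the paper's model): a Hebbian pre-post coincidence feeds a decaying eligibility trace, and a third, global surprise factor gates when that trace is converted into a weight change.

```python
import numpy as np

# Toy three-factor plasticity: factors 1 & 2 (pre- and postsynaptic
# activity) build an eligibility trace; factor 3 (a global surprise
# signal, e.g. read out from E-I imbalance) gates the actual update.

dt, tau_e, eta = 1e-3, 0.5, 0.01  # step, trace time constant, learning rate (assumed)
w, elig = 0.1, 0.0
rng = np.random.default_rng(0)

for step in range(1000):
    pre = rng.random() < 0.02                # toy presynaptic spike train
    post = rng.random() < 0.02               # toy postsynaptic spike train
    surprise = 1.0 if step == 500 else 0.0   # abrupt change in the environment

    elig += -elig * dt / tau_e + float(pre and post)  # decaying coincidence trace
    w += eta * surprise * elig               # weight changes only when surprised
```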
@modirshanechi
Alireza Modirshanechi
2 years
What do we talk about when we talk about "curiosity"? 🤔 In our new paper in @TrendsNeuro (with @KacperKond, @compneuro_epfl & @sebhaesler), we address this question by reviewing the behavioral signatures, neural mechanisms, and comp. models of curiosity: https://t.co/W8SjPPsBf1
4 replies · 59 reposts · 279 likes
@akaijsa
Kai Sandbrink
2 years
Excited that our new position piece is out! In this article, @summerfieldlab and I review three recent advances in using deep RL to model cognitive flexibility, a hallmark of human cognition: https://t.co/FX52syfUCr (1/4)
3 replies · 24 reposts · 73 likes
@Brainmind_EPFL
EPFL Brain Mind Institute
2 years
Intriguing new paper from the Gerstner lab proposes a theory for sparse coding and synaptic plasticity in cortical networks to overcome spurious input correlations.
journals.plos.org: Author summary To understand how our brains carve out meaningful stimuli from a sea of sensory information, experimentalists often focus on individual neurons and their receptive fields; i.e., the...
@compneuro_epfl
Gerstner Lab
2 years
Most sparse coding and ICA methods assume 'pre-whitened' inputs. @cstein06 shows that this is not necessary, given a smart local Hebbian learning rule and ReLU neurons! Paper just out in @PLOSCompBiol: https://t.co/d3cFN4oxOm
0 replies · 4 reposts · 13 likes
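For context, a schematic of the general setting (a Földiák/Oja-style Hebbian sketch under our own assumptions, not the paper's specific rule): ReLU units updated with purely local quantities, applied directly to raw, non-whitened inputs.

```python
import numpy as np

# Local Hebbian learning on raw (non-whitened) inputs: each synapse is
# updated using only its own presynaptic input x and postsynaptic ReLU
# response y, with an Oja-like decay that keeps the weights bounded.

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 16))     # raw inputs, deliberately not whitened
W = 0.1 * rng.normal(size=(8, 16))  # feedforward weights, 8 ReLU units
eta, theta = 1e-3, 0.5              # learning rate and sparsity threshold (assumed)

for x in X:
    y = np.maximum(W @ x - theta, 0.0)               # sparse ReLU responses
    W += eta * (np.outer(y, x) - y[:, None]**2 * W)  # local Hebbian + decay
```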