
Gerstner Lab
@compneuro_epfl
Followers: 2K · Following: 295 · Media: 8 · Statuses: 181
The Laboratory of Computational Neuroscience @EPFL_en studies models of #neurons, #networks of #neurons, #synapticplasticity, and #learning in the brain.
Lausanne, Switzerland
Joined March 2018
Our latest results (with @nickyclayton22) are now out in @NatureComms! 🥳 We propose a model of *28* behavioral experiments with food-caching jays using a *single* neural network equipped with episodic-like memory and 3-factor RL plasticity rules. 1/6
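The "3-factor RL plasticity rules" mentioned above combine presynaptic activity, postsynaptic activity, and a global modulatory signal such as reward. A minimal sketch, assuming a generic eligibility-trace formulation; the function name, time constants, and learning rate are illustrative and not taken from the paper:

```python
import numpy as np

def three_factor_update(w, elig, pre, post, modulator, eta=0.01, tau_e=0.5, dt=0.1):
    """One step of a generic three-factor rule: Hebbian co-activity is stored in an
    eligibility trace and converted into a weight change only when a third,
    network-wide factor (e.g. reward) arrives."""
    hebb = np.outer(post, pre)                 # factor 1 (pre) x factor 2 (post)
    elig += dt / tau_e * (-elig + hebb)        # low-pass filtered eligibility trace
    w += eta * modulator * elig                # factor 3 gates the actual update
    return w, elig

# Toy usage: 4 presynaptic and 3 postsynaptic neurons, sparse random reward
rng = np.random.default_rng(0)
w = np.zeros((3, 4))
elig = np.zeros((3, 4))
for _ in range(100):
    pre, post = rng.random(4), rng.random(3)
    reward = float(rng.random() < 0.1)         # occasional global reward signal
    w, elig = three_factor_update(w, elig, pre, post, reward)
```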
RT @sobeckerneuro: If you're at #foragingconference2024, come check out our poster (#60) with @modirshanechi and @compneuro_epfl today! Us….
RT @modirshanechi: I'm thrilled to share that I was recently awarded the @EPFL_en Dimitris N. Chorafas Foundation Award for my Ph.D. thesis….
RT @GauteEinevoll: Episode #22 in #TheoreticalNeurosciencePodcast: On 50 years with the Hopfield network model - with Wulfram Gerstner @….
RT @bsimsek13: 📢 I'm on the faculty job market this year! My research explores the foundations of deep learning and analyzes learning and….
RT @modirshanechi: 🚨Preprint alert🚨 In an amazing collaboration with @GruazL53069, @sobeckerneuro, & J Brea, we explored a major puzzle in….
osf.io
'Why are we curious?' has been among the central puzzles of neuroscience and psychology in the past decades. A popular hypothesis is that curiosity is driven by intrinsically generated reward...
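The hypothesis in the linked abstract, that curiosity is driven by intrinsically generated reward, is often formalized by adding a novelty bonus to the extrinsic reward an agent optimizes. A toy sketch of one common count-based variant (the bonus form and the weight beta are assumptions for illustration, not the preprint's model):

```python
from collections import defaultdict

def total_reward(extrinsic, state, visit_counts, beta=0.5):
    """Extrinsic reward plus an intrinsic novelty bonus that shrinks
    as a state is visited more often."""
    visit_counts[state] += 1
    intrinsic = visit_counts[state] ** -0.5     # rarely visited states give a large bonus
    return extrinsic + beta * intrinsic

visit_counts = defaultdict(int)
print(total_reward(0.0, state="room_A", visit_counts=visit_counts))  # 0.5 on first visit
print(total_reward(0.0, state="room_A", visit_counts=visit_counts))  # ~0.35 on second visit
```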
RT @roxana_zeraati: Headed to @BernsteinNeuro Conference this weekend and interested in how biological computation is performed across diff….
RT @schmutz_val: 1. Synaptic weight scaling in O(1/N) self-induces a form of (implicit) spatial structure in networks of spiking neurons, a….
arxiv.org
The dynamics of spatially-structured networks of $N$ interacting stochastic neurons can be described by deterministic population equations in the mean-field limit. While this is known, a general...
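The O(1/N) weight scaling in the retweet above is what makes the mean-field limit deterministic: when each synaptic weight shrinks like 1/N, a neuron's summed input converges to a population average by the law of large numbers. A quick numerical illustration (toy setup with uniform random activities; unrelated to the paper's spatial model):

```python
import numpy as np

rng = np.random.default_rng(3)
J = 2.0                                   # total coupling strength
for N in [100, 1_000, 10_000, 100_000]:
    s = rng.random(N)                     # activities of N stochastic neurons
    h = (J / N) * s.sum()                 # summed input under O(1/N) weight scaling
    print(f"N={N:>6}  summed input h={h:.4f}")   # approaches J * E[s] = 1.0 as N grows
```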
RT @FlaviohMar: Next Monday, I'll present how we exploit symmetries to identify weights of a black-box network to the EfficientML reading g….
sites.google.com
Today’s world needs orders of magnitude more efficient ML to address environmental and energy crises, optimize resource consumption and improve sustainability. With the end of Moore’s Law and Dennard...
RT @GraeffJohannes: And it's a book! Together with @okaysteve, we have gathered some of the leading experts in the field who have generousl….
RT @NatureComms: Approximation-free training method for deep SNNs using time-to-first-spike coding.
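Time-to-first-spike coding, used in the paper retweeted above, represents an analog value by the latency of a neuron's first spike: stronger inputs fire earlier. A minimal encoding sketch (the linear latency map and t_max are assumptions for illustration, not the paper's training method):

```python
import numpy as np

def ttfs_encode(x, t_max=10.0):
    """Map intensities in [0, 1] to first-spike latencies: larger x -> earlier spike.
    An intensity of 0 is treated here as 'no spike' (latency t_max)."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - x)

print(ttfs_encode([0.9, 0.5, 0.1, 0.0]))   # [ 1.  5.  9. 10.]
```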
RT @BellecGuill: Today in @NatureComms 📝 Open puzzle: training event-based spiking neurons is mysteriously impossible. @Ana__Stan 👩🏻‍🔬….
nature.com
Nature Communications - To address challenges of training spiking neural networks (SNNs) at scale, the authors propose a scalable, approximation-free training method for deep SNNs using...
RT @FlaviohMar: 📕 Recovering network weights from a set of input-output neural activations 👀 Ever wondered if this is even possible? 🤔 Chec….
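Both weight-identification retweets above rest on the symmetries of ReLU networks: permuting hidden units, or scaling a unit's incoming weights by c > 0 while dividing its outgoing weights by c, leaves the input-output map unchanged, so weights can at best be recovered up to these symmetries. A small sketch of the scaling symmetry on a toy one-hidden-layer network (not the authors' recovery method):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1, W2 = rng.normal(size=(5, 3)), rng.normal(size=5), rng.normal(size=(1, 5))

def net(x, W1, b1, W2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0)   # one-hidden-layer ReLU network

# Rescale hidden unit 0: incoming weights and bias times c, outgoing weights divided by c
c = 3.7
W1s, b1s, W2s = W1.copy(), b1.copy(), W2.copy()
W1s[0] *= c
b1s[0] *= c
W2s[:, 0] /= c

x = rng.normal(size=3)
print(np.allclose(net(x, W1, b1, W2), net(x, W1s, b1s, W2s)))   # True: identical outputs
```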
RT @bsimsek13: Excited to share a blog post on our recent work on neural network distillation.
bsimsek.com
It is important to understand how large models represent knowledge to make them efficient and safe. We study a toy model of neural nets that exhibits non-linear dynamics and phase transition....
RT @compneuro_epfl: Normative theories show that a surprise signal is necessary to speed up learning after an abrupt change in the environm….
journals.plos.org
Author summary Everybody knows the subjective feeling of surprise and behavioral reactions to surprising events such as startle response and pupil dilation are widely studied—but how can surprise...
Normative theories show that a surprise signal is necessary to speed up learning after an abrupt change in the environment; but how can such a speed-up be implemented in the brain? 🧠 We propose an answer in our new paper in @PLOSCompBiol.
journals.plos.org
Author summary Everybody knows the subjective feeling of surprise and behavioral reactions to surprising events such as startle response and pupil dilation are widely studied—but how can surprise...
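One way to unpack the claim above: a surprise signal (a large prediction error under the current belief) transiently boosts the learning rate, so estimates re-adapt quickly after an abrupt change in the environment. A hedged sketch of a surprise-modulated delta rule; the saturating modulation and all parameter values are illustrative, not the model in the paper:

```python
import numpy as np

def surprise_modulated_mean(observations, sigma=1.0, m=0.5, base_lr=0.05):
    """Track the mean of a signal with a learning rate that grows with surprise."""
    est = 0.0
    for y in observations:
        error = y - est
        surprise = error ** 2 / (2 * sigma ** 2)             # large error under the current belief
        lr = base_lr + (1 - base_lr) * m * surprise / (1 + m * surprise)  # saturates at 1
        est += lr * error
        yield est

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 50), rng.normal(8, 1, 50)])  # abrupt change at t=50
estimates = list(surprise_modulated_mean(data))
print(round(estimates[49], 2), round(estimates[55], 2))  # estimate jumps toward 8 within a few steps
```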
RT @modirshanechi: What do we talk about when we talk about "curiosity"? 🤔 In our new paper in @TrendsNeuro (with @KacperKond, @compneuro_e….
RT @akaijsa: Excited that our new position piece is out! In this article, @summerfieldlab and I review three recent advances in using deep….
RT @Brainmind_EPFL: Intriguing new paper from the Gerstner lab proposes a theory for sparse coding and synaptic plasticity in cortical netw….
journals.plos.org
Author summary To understand how our brains carve out meaningful stimuli from a sea of sensory information, experimentalists often focus on individual neurons and their receptive fields; i.e., the...
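For the sparse-coding paper in the last retweet: sparse coding usually means explaining a stimulus x with a code a that reconstructs x from a dictionary D while keeping most entries of a at zero, e.g. minimizing 0.5·||x − D a||² + λ·||a||₁. A minimal sketch of the inference step via ISTA (a generic textbook algorithm; the paper's circuit-level plasticity mechanism is not reproduced here):

```python
import numpy as np

def sparse_code(x, D, lam=0.1, n_iter=200):
    """Infer a sparse code a for x by ISTA on 0.5*||x - D@a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2                     # step size from the squared spectral norm
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)                      # gradient of the reconstruction term
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding step
    return a

rng = np.random.default_rng(2)
D = rng.normal(size=(16, 64))
D /= np.linalg.norm(D, axis=0)                        # overcomplete, unit-norm dictionary
a_true = np.zeros(64)
a_true[[3, 17, 40]] = [1.0, -0.8, 0.5]
x = D @ a_true
a_hat = sparse_code(x, D)
print(int(np.sum(np.abs(a_hat) > 1e-3)))              # few coefficients remain clearly active
```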