JustAnSVD (@DavidSabatini2)
Followers: 637 · Following: 4K · Media: 16 · Statuses: 538
Computational neuroscientist. Interested in computation using analog dynamical systems.
Under a rock · Joined July 2018
In some sense, the ubiquity of the Einstein summation convention in mathematics is a testament to the fact that vast swathes of differential and algebraic geometry, functional analysis, analysis of PDEs, etc. can be usefully reformulated in terms of zillions of dot products.
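A minimal NumPy sketch of the point (my own illustration, not from the thread): each einsum contraction below is, at bottom, a pile of dot products over the repeated index.

```python
import numpy as np

# Einstein summation convention: a repeated index is summed over.
# np.einsum makes the convention executable.

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))
v = rng.standard_normal(4)
M = rng.standard_normal((4, 4))

# Matrix-vector product: y_i = A_ij v_j  (a dot product per row)
y = np.einsum("ij,j->i", A, v)
assert np.allclose(y, A @ v)

# Matrix-matrix product: C_ik = A_ij B_jk  (row-column dot products)
C = np.einsum("ij,jk->ik", A, B)
assert np.allclose(C, A @ B)

# Trace: M_ii, the dot product of the diagonal with a vector of ones
assert np.allclose(np.einsum("ii->", M), np.trace(M))
```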
The unbearable slowness of being: Humans still clock in at just 10 bits/s. Even after peer review :) Share link for Neuron: https://t.co/P4vZlpwZWG, arXiv:
arxiv.org
This article is about the neural conundrum behind the slowness of human behavior. The information throughput of a human being is about 10 bits/s. In comparison, our sensory systems gather data at...
I'm excited to share our #NeurIPS2024 paper, "Modeling Latent Neural Dynamics with Gaussian Process Switching Linear Dynamical Systems" 🧠✨ We introduce the gpSLDS, a new model for interpretable analysis of latent neural dynamics! 🧵 1/10
I'm excited to share our #NeurIPS2024 paper with @jsoldadomagrane @SmithLabNeuro @YuLikeNeuro! We develop a new brain stimulation framework (MiSO) to drive neural population activity toward specified states. Paper: https://t.co/1VHPSHDu8d Poster Session 4 East, Dec 12 16:30 [1/n]
The first piece of work from my PhD has now been uploaded to bioRxiv! Short 🧵 below:
Hunger modulates exploration through suppression of dopamine signaling in the tail of striatum https://t.co/73YFPgs3i4
#biorxiv_neursci
1/10 Very excited to announce that my thesis project is now a preprint! We present the first detailed study of mental imagery in human ventral temporal cortex, bringing together the interests of @doristsao and @UeliRutishauser.
biorxiv.org
Mental imagery is a remarkable phenomenon that allows us to remember previous experiences and imagine new ones. Animal studies have yielded rich insight into mechanisms for visual perception, but the...
🌟Announcing our NeurIPS spotlight paper on the transition from lazy to rich🔦 We reveal through exact gradient flow dynamics how unbalanced initializations promote rapid feature learning. Co-led with @AllanRaventos and @ClementineDomi6, with @FCHEN_AI @klindt_david @SaxeLab @SuryaGanguli
How much info does population activity carry about its latents? I'm presenting at @eusipco2024: Jeon, H., & Park, I. M. Quantifying Signal-to-Noise Ratio in Neural Latent Trajectories via Fisher Information. European Signal Processing Conference. https://t.co/gvaFJt5z2g
#compneuro #tweeprint
New preprint post! We show that motor commands in the superior colliculus shift the internal representation of heading during REM sleep despite the immobility of sleeping mice. Thus, the brain simulates actions and their consequences during REM sleep.🧵1/7 https://t.co/KS6GX8smYx
biorxiv.org
Vivid dreams mostly occur during a phase of sleep called REM [1–5]. During REM sleep, the brain’s internal representation of direction keeps shifting like that of an awake animal moving...
🚨 New paper alert! Have you ever suspected that spikes, Dale's law, and E/I balance might be more than just biological constraints, but rather fundamental to how brains compute? Check out my latest work with Christian Machens @Neuro_CF: https://t.co/5lZ8TlDmIE 🧵 (1/5)
A unified Fourier slice method to derive ridgelet transform for a variety of depth-2 neural networks.
arxiv.org
To investigate neural network parameters, it is easier to study the distribution of parameters than to study the parameters in each neuron. The ridgelet transform is a pseudo-inverse operator that...
Concentration of measure, a notion from probability that is oddly little-known in neurotheory, can explain how a heterogeneous population of spiking neurons can approximate rate-based dynamics. This is shown in our new preprint https://t.co/rz8B88ZjHK from @compneuro_epfl.
I'm a neural population kind of guy, but single neurons still never get old.
I'd like suggestions on the best "pure theory" paper you've read: the clearest explanation, the best writing, etc. One caveat: there can be NO data. Just theory
Is a universal brain decoder possible? Can we train a decoding system that easily transfers to new individuals/tasks? Check out our #NeurIPS2023 paper where we show that it’s possible to transfer from a large pretrained model to achieve SOTA 🧠! Link: https://t.co/0Iebjpt4TM 🧵
(1/n) Neural circuits can learn a task in different ways. We study how the initial connectivity structure shapes learning by drawing on the rich and lazy learning theory. Preprint: https://t.co/KPLgZBknmE w/ A. Baratin, @JHCornford, @Stefan_Mihalas, E. Shea-Brown, @g_lajoie_.
arxiv.org
In theoretical neuroscience, recent work leverages deep learning tools to explore how some network attributes critically influence its learning dynamics. Notably, initial weight distributions with...
Grokking as the Transition from Lazy to Rich Training Dynamics.
1/4) Are you fond of Gaussian processes but sick of the mean-reversion induced by the usual covariance functions? Have a look: "Stationarity without mean reversion: Improper Gaussian process regression and improper kernels" Preprint: https://t.co/6YWHfAX9cw
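A toy NumPy sketch of the mean reversion being complained about (my own illustration; it assumes a standard RBF kernel and a zero prior mean, and does not implement the preprint's improper kernels): far from the data, the posterior mean decays back to the prior mean of zero, even though every observation sits near 5–6.

```python
import numpy as np

def rbf(x1, x2, ell=1.0):
    # Squared-exponential (RBF) kernel, a typical stationary covariance.
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ell**2)

X = np.array([0.0, 1.0, 2.0])   # training inputs
y = np.array([5.0, 5.5, 6.0])   # training targets, all far from 0
noise = 1e-6

# Standard GP regression posterior mean: k(x*, X) K^{-1} y
K = rbf(X, X) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, y)

def posterior_mean(xs):
    return rbf(xs, X) @ alpha

near = posterior_mean(np.array([1.0]))[0]   # near the data: ~5.5
far = posterior_mean(np.array([50.0]))[0]   # far away: reverts to ~0
```

With the tiny noise level the posterior interpolates the data, yet 50 units away the prediction has collapsed to the prior mean of zero rather than extrapolating the trend.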
@SussilloDavid As David Sussillo always says, "VISUALIZE EVERYTHING!" Thinking about making that my lab motto
A circuit motif and mechanism for quickly and dynamically reorienting neural manifolds. Our proposal: Fast switching between neural subspaces is made possible by clustered inhibitory synapses. See the preprint here: https://t.co/N1W9LP73Wc
@arvin_neuro @tetzlab
biorxiv.org
Neural activity in the brain traces sequential trajectories on low dimensional subspaces. For flexible behavior, these neural subspaces must be manipulated and reoriented within tens of milliseconds....