Haim Sompolinsky Profile
Haim Sompolinsky

@HSompolinsky

Followers: 5K · Following: 5 · Media: 11 · Statuses: 57

@Harvard Professor of MCB & Physics and Director of Swartz Program in Theoretical Neuroscience; @HebrewU Professor of Physics and Neuroscience (Emeritus)

Joined July 2018
@HSompolinsky
Haim Sompolinsky
1 year
The title of LeCun's slide says it all.
[image]
17 replies · 39 retweets · 248 likes
@HSompolinsky
Haim Sompolinsky
1 year
Thanks, Yann, for the great, inspiring talk at Harvard.
[image]
4 replies · 5 retweets · 131 likes
@HSompolinsky
Haim Sompolinsky
1 year
RT @Isaac_Herzog: With great Israeli pride, today I congratulated Professor Haim Sompolinsky of the Hebrew University of Jerusalem on winning the prestigious Lundbeck Prize….
0 replies · 22 retweets · 0 likes
@HSompolinsky
Haim Sompolinsky
2 years
Topics include principles of early sensory processing; unsupervised and supervised learning; attractors, memory, and spatial functions in cortical circuits; noise, chaos, and neural coding; learning, representations, and cognition in deep neural networks in brains and AI. 3/3.
0 replies · 0 retweets · 28 likes
@HSompolinsky
Haim Sompolinsky
2 years
The course introduces analytical and numerical tools from information theory, dynamical systems, statistics, statistical physics, AI, and machine learning for the study of neural computation. 2/3.
1 reply · 0 retweets · 25 likes
@HSompolinsky
Haim Sompolinsky
2 years
My Harvard/Neuro 231, 2024 Edition begins soon. It explores contemporary brain theory spanning local neuronal circuits as well as deep neural networks. It examines the relationship between network structure, dynamics, and computation. 1/3.
8 replies · 10 retweets · 153 likes
@HSompolinsky
Haim Sompolinsky
2 years
The theory unites NTK and NNGP as two limits of the same underlying process. We introduce the Neural Dynamical Kernel (NDK), derive equations for the dynamics of the mean predictor of the network, and discuss implications for the problem of representational drift in neuroscience.
0 replies · 0 retweets · 6 likes
@HSompolinsky
Haim Sompolinsky
2 years
I am excited to announce recent work by Yehonatan Avidan and Qianyi Li presenting an analytical theory of learning dynamics in infinitely wide neural networks.
1 reply · 19 retweets · 83 likes
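For intuition about the NTK and NNGP limits referred to above, here is a minimal numpy sketch (an illustration under assumed settings, not code from the paper): for a one-hidden-layer ReLU network f(x) = a·relu(Wx)/√width, the infinite-width NNGP kernel has a closed arc-cosine form, the infinite-width NTK adds the hidden-layer gradient term, and the finite-width empirical NTK (the inner product of parameter gradients) concentrates around the closed form as the width grows.

```python
import numpy as np

def nngp_relu(x1, x2):
    # Closed-form NNGP ("arc-cosine") kernel of a one-hidden-layer ReLU net
    # at infinite width: E_w[relu(w.x1) * relu(w.x2)], with w ~ N(0, I).
    n1, n2 = np.linalg.norm(x1), np.linalg.norm(x2)
    t = np.arccos(np.clip(x1 @ x2 / (n1 * n2), -1.0, 1.0))
    return n1 * n2 * (np.sin(t) + (np.pi - t) * np.cos(t)) / (2 * np.pi)

def ntk_relu(x1, x2):
    # Closed-form infinite-width NTK of the same net: the NNGP term plus the
    # contribution of the hidden-layer weight gradients.
    t = np.arccos(np.clip(x1 @ x2 / (np.linalg.norm(x1) * np.linalg.norm(x2)),
                          -1.0, 1.0))
    return nngp_relu(x1, x2) + (x1 @ x2) * (np.pi - t) / (2 * np.pi)

def empirical_ntk(x1, x2, W, a):
    # Finite-width NTK of f(x) = a . relu(W x) / sqrt(width): the inner
    # product of the gradients of f w.r.t. all parameters at the two inputs.
    width = W.shape[0]
    def grad(x):
        h = W @ x
        return np.concatenate([np.maximum(h, 0.0),                 # df/da
                               np.outer(a * (h > 0), x).ravel()]   # df/dW
                              ) / np.sqrt(width)
    return grad(x1) @ grad(x2)

rng = np.random.default_rng(0)
d, width = 5, 200_000
x1, x2 = rng.normal(size=d), rng.normal(size=d)
W, a = rng.normal(size=(width, d)), rng.normal(size=width)
print("infinite-width NTK:", ntk_relu(x1, x2))
print("empirical NTK:     ", empirical_ntk(x1, x2, W, a))  # close at large width
```

The sketch only computes the two endpoint kernels; the NDK of the tweet above describes the learning dynamics that connect these two limits, which this toy does not attempt to reproduce.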
@HSompolinsky
Haim Sompolinsky
2 years
JetBlue just entered the Guinness Book of World Records for greediness: it cancelled my flight due to weather conditions but refused to fully refund me. They charged me a cancellation fee! Never underestimate the creative ways companies such as JetBlue chase your money. Any advice?
5 replies · 0 retweets · 6 likes
@HSompolinsky
Haim Sompolinsky
3 years
Our model proposes a novel scheme for associative memory of temporal sequences. In contrast to sequence attractors (Sompolinsky and Kanter, 1988), here entire sequences are stored holistically as fixed points, a scheme that is robust to overlap between sequences.
[image]
0 replies · 2 retweets · 9 likes
@HSompolinsky
Haim Sompolinsky
3 years
and then store the multiple compressed vectors as fixed points in an RNN. Retrieval of a fixed point is followed by decoding of its individual linked components. Thus, the model consists of two memory systems: a 'dictionary' for items and an 'episodic' memory for their linked structures.
[image]
1 reply · 1 retweet · 13 likes
@HSompolinsky
Haim Sompolinsky
3 years
However, real-life memory must be able to store multiple knowledge structures, each composed of connected building blocks such as episodes, cognitive maps, and temporal sequences. To tackle this problem, we first compress structures into fixed-length distributed representations.
1 reply · 1 retweet · 4 likes
@HSompolinsky
Haim Sompolinsky
3 years
I am excited to announce the publication of Julia Steinberg's wonderful paper on associative memory of structured knowledge in Scientific Reports. Most neural models of associative memory store structureless knowledge as simple random patterns in RNNs.
2 replies · 15 retweets · 113 likes
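To make the "fixed points in an RNN" idea concrete, here is a generic Hopfield-style sketch in numpy (an assumed toy setting, not the model from the paper): the pseudo-inverse learning rule stores each vector as an exact fixed point of the recurrent dynamics, and a corrupted cue relaxes back to the stored pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 10                        # neurons, stored patterns
X = rng.choice([-1.0, 1.0], size=(N, P))

# Pseudo-inverse rule: J is the projection onto span(X), so J X = X exactly
# and every stored column of X is a fixed point of s -> sign(J s).
J = X @ np.linalg.pinv(X)

cue = X[:, 0].copy()
flip = rng.choice(N, size=30, replace=False)
cue[flip] *= -1                       # corrupt 30 of the 200 bits

s = cue
for _ in range(20):                   # discrete-time attractor dynamics
    s = np.sign(J @ s)
print("overlap with stored pattern:", (s @ X[:, 0]) / N)  # ~1.0 on success
```

The pseudo-inverse rule is one standard choice here: because J projects onto the span of the stored vectors, even correlated (overlapping) patterns remain exact fixed points, which is the kind of robustness the thread emphasizes.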
@HSompolinsky
Haim Sompolinsky
3 years
RT @NatComputSci: We highlight a study by Ben Sorscher, @SuryaGanguli and @HSompolinsky in which they explore a quantitative theory of neur….
0 replies · 4 retweets · 0 likes
@HSompolinsky
Haim Sompolinsky
3 years
RT @SuryaGanguli: Our new paper @NeuroCellPress "A unified theory for the computational and mechanistic origins of grid cells" led by Ben….
0 replies · 47 retweets · 0 likes
@HSompolinsky
Haim Sompolinsky
3 years
RT @SuryaGanguli: Our new paper in @PNASNews: "Neural representational geometry underlies few-shot concept learning" led by….
0 replies · 66 retweets · 0 likes
@HSompolinsky
Haim Sompolinsky
3 years
Remarkably, we find that object manifolds in DNN visual feature layers also support zero-shot learning of concepts from linguistic descriptors, revealing a geometric alignment between semantic features in Word2Vec and high-level visual features of the same concepts.
[image]
0 replies · 5 retweets · 21 likes
@HSompolinsky
Haim Sompolinsky
3 years
And show that they support highly accurate few-shot learning of novel visual concepts, and that the variability in performance across concepts closely follows the predictions of our manifold geometric theory.
[image]
1 reply · 1 retweet · 9 likes
@HSompolinsky
Haim Sompolinsky
3 years
Our work on the manifold geometric theory underlying fast learning of novel concepts, led by Ben Sorscher of the Ganguli Lab, is out. We apply our theory to object manifold representations in deep neural networks (DNNs) and in macaque IT cortex.
2 replies · 38 retweets · 187 likes
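As a toy illustration of the prototype-based few-shot learning this thread describes (the Gaussian "manifolds", dimensions, and noise scale below are illustrative assumptions, not the paper's setup): average a few examples of each novel concept into a prototype and classify test points by the nearest prototype; accuracy is then set by the signal-to-noise geometry of the concept manifolds.

```python
import numpy as np

rng = np.random.default_rng(2)
D, m, s, trials = 500, 5, 4.0, 2000   # feature dim, shots, noise scale, trials

centers = rng.normal(size=(2, D))     # two novel "concept manifold" centers
correct = 0
for _ in range(trials):
    # m noisy examples per concept; the prototype is their mean
    shots = centers[:, None, :] + s * rng.normal(size=(2, m, D))
    protos = shots.mean(axis=1)
    # a fresh test example drawn from concept 0
    test = centers[0] + s * rng.normal(size=D)
    correct += np.argmin(np.linalg.norm(protos - test, axis=1)) == 0
print("few-shot accuracy:", correct / trials)
```

Shrinking the noise scale s or increasing the number of shots m raises the accuracy, mirroring the geometric signal-to-noise picture in the thread.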
@HSompolinsky
Haim Sompolinsky
3 years
Postdoc position in the Sompolinsky Group: if you are interested in doing exciting postdoc research at Harvard at the forefront of computational neuroscience and the interface between natural and artificial intelligence, send an application and 3 letters of recommendation to hsompolinsky@mcb.harvard.edu.
2 replies · 62 retweets · 129 likes