Ian Ellwood Profile
Ian Ellwood

@IanTEllwood

Followers
38
Following
1
Media
0
Statuses
14

Neuroscientist

Ithaca, NY
Joined March 2022
@IanTEllwood
Ian Ellwood
6 months
Wrote my congressman and senators today to ask them to speak up against the sudden and capricious cancellation of study sections at the NIH. Delaying funding for essential research into human health is idiotic.
0
0
0
@IanTEllwood
Ian Ellwood
8 months
Based on these findings, we propose that dopamine release depends on the "total valence" of a stimulus, or S = reward + aversion.
0
0
0
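A minimal sketch of the relationship proposed in the tweet above, written literally as stated; the function and argument names are illustrative, and the sign convention for the aversive component is an assumption left open rather than something the tweet specifies.

def total_valence(reward, aversion):
    # "Total valence" S as proposed above: the sum of a stimulus's
    # rewarding and aversive components, S = reward + aversion.
    # The proposal is that PFC dopamine release tracks S rather than
    # the reward or aversion component alone.
    return reward + aversion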
@IanTEllwood
Ian Ellwood
8 months
We also found that omissions of expected stimuli do not trigger release and that several stimuli with little or no valence lead to minimal release even when they are engaging, showing that PFC dopamine is not simply a surprise signal.
1
0
0
@IanTEllwood
Ian Ellwood
8 months
Dopamine release in the prefrontal cortex (PFC) is fascinating because it occurs following both reward and aversion. Here we tested what happens when you mix reward and aversion and found that the amount of release seemed to depend on the sum of the rewarding and aversive components.
1
0
0
@IanTEllwood
Ian Ellwood
2 years
One way of interpreting my paper is that, if the “handful of synapses” in Larkum’s formulation are selected via a comparison of pre- and postsynaptic activity (which I call the “match and control principle” in the paper), you can implement transformer-like attention.
0
0
4
@IanTEllwood
Ian Ellwood
2 years
…while a handful of synapses several orders of magnitude less in number but with explosive impact dictate the firing of the neuron”.
1
0
3
@IanTEllwood
Ian Ellwood
2 years
When this project was in its infancy, a paper by Matthew Larkum, “Are dendrites conceptually useful?”, wrote of layer 2/3 pyramidal neurons, “The emerging picture is one where thousands of inputs combine to clamp the neuron at a particular subthreshold value, …
1
0
4
@IanTEllwood
Ian Ellwood
2 years
In this paper, the attention computation is performed fully in parallel, using the large number of spines on apical dendrites to perform the many comparisons. The only limitation is that the number of spines on the dendrite caps the number of comparisons at a few hundred.
1
0
4
@IanTEllwood
Ian Ellwood
2 years
The queries are then compared, one at a time, with the set of keys (N steps for N queries). This computation is linear in the number of keys and queries. It’s a great approach for sequence processing, but relatively slow if all the keys and queries must be processed at once.
1
0
4
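A minimal sketch of the sequential scheme this tweet describes (the key-storage step it refers to is explained in the tweet just below, since this feed runs newest-first): the stored keys are represented here as a plain matrix standing in for the neural-network memory those articles use, which is an assumption, and each query is then compared with the full key set one at a time, N steps for N queries.

import numpy as np

rng = np.random.default_rng(0)
N, d = 6, 16                        # number of tokens, embedding dimension
keys = rng.normal(size=(N, d))      # stored first, one step per key
queries = rng.normal(size=(N, d))

# Queries are handled one at a time: N sequential steps, each comparing
# a single query against the whole set of keys (linear in the number of keys).
all_scores = []
for q in queries:
    scores = keys @ q / np.sqrt(d)  # similarity of this query to every key
    all_scores.append(scores)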
@IanTEllwood
Ian Ellwood
2 years
Most “transformers in the brain” articles have tried to avoid the quadratic-time computation of transformers (N keys compared with N queries, requiring N^2 comparisons) by first storing the keys in a neural network (N steps for N keys).
1
0
3
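For context on the quadratic step mentioned above, here is a minimal NumPy sketch of standard scaled dot-product attention, in which every query is compared with every key at once; the N x N score matrix is the N^2 comparison cost the tweet refers to. This is the generic transformer computation, not the circuit model from the paper.

import numpy as np

rng = np.random.default_rng(0)
N, d = 6, 16                      # N tokens, embedding dimension d
Q = rng.normal(size=(N, d))       # queries
K = rng.normal(size=(N, d))       # keys
V = rng.normal(size=(N, d))       # values

# Every query is compared with every key in one matrix product,
# producing an N x N score matrix: the quadratic step.
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)   # softmax over keys
out = weights @ V                               # attention output, shape (N, d)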
@IanTEllwood
Ian Ellwood
2 years
My article on how apical dendrites of pyramidal neurons can implement a computation similar to transformer attention is now published in PLOS Computational Biology
journals.plos.org
Author summary Many of the most impressive recent advances in machine learning, from generating images from text to human-like chatbots, are based on a neural network architecture known as the...
2
27
104
@IanTEllwood
Ian Ellwood
2 years
Tiny headsets for mouse VR! Great fun collaborating with the Schaffer-Nishimura lab on this project. Lead authors Matt David Isaacson and Hongyu Chang put in incredible effort to design, build, and thoroughly test this system!
researchsquare.com
We present MouseGoggles, a miniaturized virtual reality (VR) display for head-fixed mice that delivers independent, binocular visual stimulation over a wide field of view. Neural recordings in the...
0
3
12
@IanTEllwood
Ian Ellwood
2 years
It's still unclear if transformers have relevance to the brain, but in this paper I show that short-term Hebbian synaptic potentiation is enough to implement transformer-like attention:
biorxiv.org
Transformers have revolutionized machine learning models of language and vision, but their connection with neuroscience remains tenuous. Built from attention layers, they require a mass comparison of...
0
1
1
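A minimal sketch of the general idea in the tweet above, using the standard fast-weight (outer-product) formulation of Hebbian short-term potentiation; it shows the softmax-free core of the claim and is not the paper's specific dendritic implementation, whose details (the "match and control" selection, dendritic spines) are not reproduced here.

import numpy as np

rng = np.random.default_rng(1)
d, dv, N = 16, 8, 5                 # key/query dim, value dim, stored items
keys = rng.normal(size=(N, d))
values = rng.normal(size=(N, dv))

# Short-term Hebbian potentiation: each (key, value) pair adds an
# outer-product increment to a fast synaptic weight matrix.
W = np.zeros((dv, d))
for k, v in zip(keys, values):
    W += np.outer(v, k)

# Retrieval: passing a query through the potentiated weights returns
# the stored values weighted by key-query similarity, i.e. an
# unnormalized, attention-like lookup.
q = keys[2]
readout = W @ q

# The same result computed as an explicit attention sum over stored items.
explicit = (keys @ q) @ values
print(np.allclose(readout, explicit))   # True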