Matthew Larkum

@mattlark

Followers
3K
Following
328
Media
34
Statuses
296

Experimental neuroscientist at Humboldt Uni Berlin, violinist, chamber music fanatic.

Germany
Joined July 2009
@mattlark
Matthew Larkum
6 months
If the latter, then even a lifeless replay that's mechanical & unresponsive might be conscious. If the former, then consciousness requires more than the right outputs: it depends on possibility space, not just actual state transitions.
1
0
2
@mattlark
Matthew Larkum
6 months
The computational functionalist now faces a hard choice: either consciousness depends on what else the system could have done (its counterfactuals), or mere replay of identical steps is enough to generate experience.
1
0
2
@mattlark
Matthew Larkum
6 months
If the system behaves identically for the same input, but fails under different input, is it still computing? Or just enacting a fixed sequence? What’s left of the program when its flexibility, the capacity for alternatives, is gone?
1
0
0
@mattlark
Matthew Larkum
6 months
But give it a different input, and it breaks. Some transitions are inaccessible. The machine can’t reach certain states; it’s been hollowed out. The counterfactual space is gone. So what was doing the work: the path it took, or the paths it could have taken?
1
0
1
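The "hollowed out" machine in the tweet above can be sketched in a few lines. This is an illustrative toy, not anything from the paper: a recording taken from one run of a machine that appends a "1" to its input is re-enacted verbatim, with no transition function consulted. For the original input the output matches; for any other input it does not.

```python
# Sketch: a "hollowed out" machine that re-enacts a fixed recording
# instead of computing. The recording was taken from a run on input "111"
# of a machine that appends one "1". All names and values are illustrative.
recording = [(0, "1"), (1, "1"), (2, "1"), (3, "1")]  # (head position, symbol written)

def hollow(tape_input):
    tape = dict(enumerate(tape_input))
    for head, write in recording:   # no transition function is consulted
        tape[head] = write          # just replay each recorded write
    return "".join(tape[i] for i in sorted(tape))

hollow("111")   # matches the genuine computation on the recorded input
hollow("11")    # same output regardless: the counterfactuals are gone
```

On the recorded input the replay is behaviourally indistinguishable from the real machine; on any other input it produces the same canned answer, which is the sense in which its counterfactual space has been removed.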
@mattlark
Matthew Larkum
6 months
In Turing Machine terms, the “program” follows exactly the same steps as before when given the same input. Every state transition, symbol write, and head movement unfolds identically. From the outside, it’s indistinguishable from genuine computation.
1
0
0
@mattlark
Matthew Larkum
6 months
So the causal structure is preserved. The same neurons fire in the same order for the same reasons. If the computational functionalist accepts the original run as conscious, must they accept the re-run too?
1
0
1
@mattlark
Matthew Larkum
6 months
Now we add a twist. In the biological version, the patch clamp only intervenes if needed, to keep the neuron's membrane potential on track. If the input is the same, no intervention occurs. The system runs as it did originally.
1
0
1
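The conditional intervention described above can be sketched as a clamp that fires only when the live trajectory drifts from the recording. The values, tolerance, and function names here are illustrative assumptions, not details from the paper.

```python
# Sketch of the conditional patch clamp: intervene only if the live
# membrane potential drifts from the recorded trajectory.
# All numbers and the tolerance are illustrative.
recorded_potentials = [-70.0, -55.0, -70.0]  # recorded trajectory (mV)

def step_with_clamp(live_mv, t, tolerance=0.5):
    """Return (value_used, intervened) for time step t."""
    target = recorded_potentials[t]
    if abs(live_mv - target) > tolerance:
        return target, True       # off track: clamp to the recording
    return live_mv, False         # on track: leave the neuron alone

# Same input, same dynamics: the neuron stays on track and the
# clamp never fires, so the system runs exactly as it did originally.
step_with_clamp(-70.1, 0)
```

The point of the twist is visible in the code: when the input is unchanged, the intervention branch is never taken, yet its mere presence changes what the system *could* have done.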
@mattlark
Matthew Larkum
6 months
Could a dancing head on a ticker tape be instantiating first-person experience? Isn’t this a degenerate “computation”? Perhaps it’s analogous to the unfolding argument, where appearance mimics process without doing the work:
1
0
1
@mattlark
Matthew Larkum
6 months
First, we do a forward replay, as in the original thought experiment. We ignore the transition function and simply move the head, write symbols to the tape, and recreate the sequence exactly, like hitting "play" on a recording.
1
0
1
@mattlark
Matthew Larkum
6 months
To clarify what’s going on, we model the same situation using a Universal Computing Device, also known as a Turing Machine. It follows transition rules to move between states and compute outputs, while the behaviour of the machine is recorded.
1
0
1
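The machine described in the tweet above can be made concrete with a minimal deterministic Turing machine whose behaviour is recorded as it runs. This is a generic illustrative example (a unary increment), not the machine from the paper.

```python
# Minimal deterministic Turing machine (illustrative): unary increment.
# delta maps (state, symbol) -> (new_state, symbol_to_write, head_move).
delta = {
    ("scan", "1"): ("scan", "1", +1),   # skip over existing 1s
    ("scan", "_"): ("halt", "1", 0),    # append one more 1, then halt
}

def run(tape_input, delta, state="scan", head=0):
    tape = dict(enumerate(tape_input))  # sparse tape; blank cells read "_"
    trace = []                          # record every transition taken
    while state != "halt":
        sym = tape.get(head, "_")
        state, write, move = delta[(state, sym)]
        tape[head] = write
        trace.append((state, head, write, move))
        head += move
    return tape, trace

tape, trace = run("111", delta)  # trace is the machine's recorded behaviour
```

Every state transition, symbol write, and head movement is captured in `trace`, which is exactly the kind of recording that a later replay could re-enact without ever consulting `delta`.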
@mattlark
Matthew Larkum
6 months
So we assume a deterministic machine carries out this computation. We record from every neuron again, but this time, when we give the same input, the machine produces the same output anyway. No intervention needed.
1
0
1
@mattlark
Matthew Larkum
6 months
A computational functionalist believes that the brain computes consciousness. In the extension, this view gets pushed into a delicious paradox by slightly changing the setup.
1
0
1
@mattlark
Matthew Larkum
6 months
This challenges the computational functionalist to identify the key aspect of brain activity that should feel like something. If the pattern is the same, and the structure is the same, what makes it conscious?
1
0
1
@mattlark
Matthew Larkum
6 months
Can neural computation feel like something? In our new paper, we explore a paradox: if you record and replay all the neural activity of a brain, down to each neuron, does replay create 1st-person experience?
frontiersin.org
Artificial neural networks are becoming more advanced and human-like in detail and behavior. The notion that machines mimicking human brain computations migh...
3
12
38
@mattlark
Matthew Larkum
1 year
Looking forward to the talk by @Sarah_D_Ayash next Tuesday, Sep 10 at 4 pm (CEST) for our @SFB1315 seminar series! All are welcome in person or via Zoom.
sfb1315.de
Hosted by Matthew Larkum (A04, A10, Z, Speaker)
0
6
14
@TheBrunoCortex
Randy Bruno
1 year
This pandemic-era manuscript is now a Reviewed Preprint in eLife... Apical tufts can develop durable selectivity for behaviorally relevant stimuli (whether rewarded or unrewarded) during task learning. https://t.co/9J89uxOn44
@TheBrunoCortex
Randy Bruno
4 years
An apical tuft is one of the most prominent features of a pyramidal neuron. What do these mysterious subcellular compartments do during learning? A new study from @Sam_Benezra tracks the activity of the same apical tufts across learning of a behavior. 1/n
1
13
59
@mattlark
Matthew Larkum
2 years
Congratulations to Tim Zolnik who made a superhuman effort for this paper ( https://t.co/u8mEXaxGOP)! Thanks to the rest of the team including @BrittaEickholt and our Visiting Einstein Fellow @ZoltanMolnar64.
2
33
137
@SFB1315
Collaborative Research Center 1315
2 years
That's a wrap! Highlights include a PhD-postdoc session, posters, a themed discussion w. Richard Morris, talks by @Zoe_Chi_Ngo, @OwaldD, @DeetjeIggena, @YeeLeeShing1, @JamesPoulet & Brenda Milner awardee @ATzilivaki - thanks to all! https://t.co/qYneHcnJaN
0
7
20
@mattlark
Matthew Larkum
3 years
#PLOSBiology: Does brain activity cause consciousness? A thought experiment https://t.co/N2qIrUrp9L As a neurophysiologist I feel like I’m obliged to say ‘yes’. So why am I so conflicted by this thought experiment? @jaaanaru @LibedinskyLab @AnnaSchapiro
journals.plos.org
The authors of this Essay examine whether action potentials cause consciousness in a three-step thought experiment that assumes technology is advanced enough to fully manipulate our brains.
13
59
194