@KordingLab
Kording Lab 🦖
3 years
Research on interpreting units in artificial neural networks fails to be falsifiable. And just about everything that Matt Leavitt and @arimorcos say about the problem in ANNs is a problem in neuroscience.
7
41
170

Replies

@dileeplearning
Dileep George
3 years
@KordingLab @arimorcos That is comparing two very different settings: a lot of the problems in neuroscience originate from partial observability, whereas the problems in ANN interpretation exist despite full observability.
2
1
10
@KordingLab
Kording Lab 🦖
3 years
@dileeplearning @arimorcos Isn't it shocking that, despite full observability and full causal access, it is still a problem in ANNs? So even if we could overcome *all* experimental problems, we would still be in trouble.
5
0
15
@leavittron
Matthew Leavitt
3 years
@KordingLab @arimorcos Ari and I were both trained as neuroscientists, so it's possible we were primed to see this problem after moving into a new research area.
0
0
10
@ninsellab
NInSEL
3 years
@KordingLab @arimorcos Seems like two issues are being conflated (maybe): 1) that low-dimensional descriptions of units/networks poorly explain system operations, and 2) that all research in these areas should rely on strong, falsifiable hypotheses. The former is empirical and interesting; the latter I disagree with.
2
0
4
@jeffrey_bowers
Jeffrey Bowers
3 years
@KordingLab @arimorcos Missing reference to the Vision Research paper (Gale et al., 2020) that highlights how various metrics of selectivity, including generating images that drive single units, are misleading:
1
1
3
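For context, "generating images that drive single units" usually refers to gradient-based activation maximization. The sketch below is purely illustrative (it is not from the thread and not Gale et al.'s code); the model, layer, and unit index are arbitrary assumptions:

```python
# Illustrative sketch of activation maximization ("generating images that drive
# single units"), the selectivity metric the tweet refers to. Model, layer, and
# unit choices are arbitrary; weights=None keeps the sketch offline (in practice
# one would load pretrained weights).
import torch
import torchvision.models as models

model = models.alexnet(weights=None).eval()
layer = model.features[10]          # an arbitrary conv layer
unit = 5                            # an arbitrary channel in that layer

activation = {}
def hook(_, __, output):
    activation["out"] = output
handle = layer.register_forward_hook(hook)

# Start from noise and ascend the gradient of the chosen unit's mean activation.
img = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    model(img)
    loss = -activation["out"][0, unit].mean()   # maximize => minimize the negative
    loss.backward()
    opt.step()
handle.remove()
# `img` now strongly drives the chosen unit; the paper's point is that such
# images can be a misleading summary of what the unit contributes to the network.
```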
@unsorsodicorda
andrea panizza
3 years
@KordingLab @arimorcos At least in neuroscience @GaelVaroquaux built statistically valid interpretation tools 🙂
@GaelVaroquaux
Gael Varoquaux @GaelVaroquaux.bsky.social
3 years
New paper in @NeuroImage_EiC ✨ "Decoding with Confidence: Statistical Control on Decoder Maps" with Jerome-Alexis Chevalier, @_tbng, @salmonjsph & @BertrandThirion Why statistical control on decoder maps? What kind of control? How? 1/5 ⤵️
2
30
94
0
0
3
@DanzigerZachary
Zach Danziger
3 years
@KordingLab @arimorcos The analogy is even tighter with neural engineering. Often we can "get away" with omitting mechanism or falsifiable hypotheses because it's possible to build something that works and is helpful, so there is less pressure for understanding. DNNs are the ultimate in useful and opaque.
0
0
4
@ShahabBakht
Shahab Bakhtiari
3 years
@KordingLab @arimorcos Maybe I’m wrong, but I feel that neuroscientists are more aware of the limitations of this kind of approach now, because they spent >50 years studying the visual system, trying to find optimal stimuli for every neuron, making tuning curves, etc., and it didn’t get them very far.
0
0
8
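To make the "optimal stimuli / tuning curves" workflow mentioned above concrete, here is a minimal sketch on synthetic data; the Gaussian tuning model and every number in it are invented for illustration:

```python
# Minimal sketch of the classic single-neuron workflow: average responses over
# repeats of each stimulus to get a tuning curve, then read off the "optimal"
# stimulus. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
orientations = np.arange(0, 180, 15)      # stimulus orientations in degrees
preferred, n_repeats = 60.0, 20

# Simulated spike counts: Gaussian tuning around the preferred orientation.
rates = 10 * np.exp(-0.5 * ((orientations - preferred) / 25) ** 2)
responses = rng.poisson(rates, size=(n_repeats, len(orientations)))

tuning_curve = responses.mean(axis=0)     # mean response per stimulus
optimal = orientations[tuning_curve.argmax()]
print(f"estimated optimal orientation: {optimal} deg")
# The thread's point: decades of such single-unit summaries explained less about
# system-level computation than hoped.
```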