@c_caucheteux
Charlotte Caucheteux @ICML24
3 years
🔄 Our work has now been accepted to NeurIPS 2022!! 'Toward a realistic model of speech processing in the brain with self-supervised learning': https://t.co/weiGlaiD65 Let's meet in New Orleans on Tue 29 Nov 2:30pm PST (Hall J #524). A recap of the 3 main results below 👇

Replies

@c_caucheteux
Charlotte Caucheteux @ICML24
3 years
Question: can a model trained on a *plausible* amount of *raw* speech explain both intelligent behavior and its brain bases? Here, we train wav2vec 2.0 w/ 600h of audio and map its activations onto the brains of 417 volunteers recorded with fMRI while listening to audio books.
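In practice, "mapping activations onto the brain" is usually done with a linear encoding model: extract the network's activations for the audio the subjects heard, downsample them to the fMRI sampling rate, and fit a regularized regression that predicts each voxel's signal. A minimal sketch of that idea, using a public wav2vec 2.0 checkpoint and placeholder fMRI data (the checkpoint, shapes and preprocessing are assumptions for illustration, not the paper's exact pipeline):

```python
# Minimal sketch of a linear encoding model: predict fMRI responses from
# wav2vec 2.0 activations.  Checkpoint, shapes and placeholder data are
# assumptions for illustration, not the paper's actual setup.
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
from sklearn.linear_model import RidgeCV

model_name = "facebook/wav2vec2-base"  # any wav2vec 2.0 checkpoint works here
extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name)
model = Wav2Vec2Model.from_pretrained(model_name, output_hidden_states=True).eval()

# Placeholder: 30 s of 16 kHz audio (in the study: audiobook recordings).
waveform = np.random.randn(16_000 * 30).astype(np.float32)
inputs = extractor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).hidden_states  # tuple of (1, n_frames, dim), one per layer

# Average one layer's activations within each fMRI volume (one row per TR).
layer = hidden[6][0].numpy()
n_trs = 20                                   # hypothetical number of fMRI volumes
frames_per_tr = layer.shape[0] // n_trs
X = layer[: frames_per_tr * n_trs].reshape(n_trs, frames_per_tr, -1).mean(axis=1)

# Placeholder fMRI data: one column per voxel.
Y = np.random.randn(n_trs, 1_000)

# Fit a ridge regression from activations to voxels on the first half,
# predict the second half (real analyses use proper cross-validation).
half = n_trs // 2
encoder = RidgeCV(alphas=np.logspace(-1, 5, 7)).fit(X[:half], Y[:half])
Y_pred = encoder.predict(X[half:])
```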
@c_caucheteux
Charlotte Caucheteux @ICML24
3 years
Result 1: self-supervised learning suffices to make this algorithm learn brain-like representations (i.e. most brain areas significantly correlate with its activations in response to the same speech input). 2/n
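The alignment is typically quantified with a per-voxel "brain score": the correlation between the encoding model's held-out predictions and the measured signal, then tested for significance across subjects. A small sketch of that score (significance testing omitted), reusing `Y`, `half` and `Y_pred` from the snippet above:

```python
# Sketch of a voxelwise "brain score": Pearson correlation between the
# encoding model's held-out predictions and the measured fMRI signal.
# Significance testing (e.g. across the 417 subjects) is omitted here.
import numpy as np

def brain_score(y_true: np.ndarray, y_pred: np.ndarray) -> np.ndarray:
    """Return one correlation coefficient per voxel (per column)."""
    yt = y_true - y_true.mean(axis=0)
    yp = y_pred - y_pred.mean(axis=0)
    num = (yt * yp).sum(axis=0)
    den = np.sqrt((yt ** 2).sum(axis=0) * (yp ** 2).sum(axis=0))
    return num / den

# scores = brain_score(Y[half:], Y_pred)   # one correlation per voxel
```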
@c_caucheteux
Charlotte Caucheteux @ICML24
3 years
Result 2: The hierarchy learnt by the algorithm maps onto the brain's: The auditory cortex is best aligned with the first layer of the transformer (blue), whereas the prefrontal cortex is best aligned with its deepest layers (red). 3/n
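One way to obtain such a map is to fit one encoding model per layer and record, for each voxel, which layer predicts it best; projecting that index onto the cortical surface (e.g. with nilearn) gives a blue-to-red gradient like the one described here. A hedged sketch, reusing the placeholder variables and `brain_score` from the snippets above:

```python
# Sketch: one encoding model per layer, then the best-predicting layer per voxel.
# Reuses `hidden`, `Y`, `n_trs`, `half` and `brain_score` from the sketches above.
import numpy as np
from sklearn.linear_model import RidgeCV

layer_scores = []
for layer_states in hidden:                      # one entry per transformer layer
    acts = layer_states[0].numpy()
    frames_per_tr = acts.shape[0] // n_trs
    X = acts[: frames_per_tr * n_trs].reshape(n_trs, frames_per_tr, -1).mean(axis=1)
    enc = RidgeCV(alphas=np.logspace(-1, 5, 7)).fit(X[:half], Y[:half])
    layer_scores.append(brain_score(Y[half:], enc.predict(X[half:])))

# For each voxel, the layer with the highest brain score: early layers would map
# to auditory cortex, deep layers to prefrontal areas, per the result above.
best_layer = np.stack(layer_scores).argmax(axis=0)
```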
@c_caucheteux
Charlotte Caucheteux @ICML24
3 years
Result 3: With an additional 386 subjects, we show that wav2vec 2.0 learns both the speech-specific and the language-specific representations of the prefrontal and temporal cortices, respectively. 4/n
@c_caucheteux
Charlotte Caucheteux @ICML24
3 years
Conclusion: Modeling human-level intelligence is a far-off goal. Still, the emergence of brain-like functions in self-supervised algorithms suggests that we may be on the right path. 5/n
@c_caucheteux
Charlotte Caucheteux @ICML24
3 years
This is joint work with our great team 🤩🤩 Juliette Millet, @PierreOrhan, Y Boubenec, @agramfort, E Dunbar, @chrplr and @JeanRemiKing, at @MetaAI, @ENS_ULM, @Inria & @Neurospin 6/n
@c_caucheteux
Charlotte Caucheteux @ICML24
3 years
šŸ™ Thanks to @samnastase, @HassonUri, John Hale, @nilearn, @pyvista and the open-source and open-science communities for making this possible! 7/7
@schulzb589
Ben Schulz
3 years
@c_caucheteux @samnastase @HassonUri @nilearn @pyvista Is there data for time stamping audio events? Similar to "Our Brains "Time-Stamp" Sounds to Process the Words We Hear" https://t.co/l4Zmu6iLMW.
@Mikerhinos
Michael Dufour (e/acc)
3 years
@c_caucheteux @samnastase @HassonUri @nilearn @pyvista "You wouldn't download a brain"šŸ˜… Seriously is it the 1st steps of it, and injecting some data into the brain to learn new languages just with an upload ?