Jean-Rémi King
@JeanRemiKing
Followers 7K · Following 2K · Media 235 · Statuses 1K
Researcher @MetaAI AI - Neuroscience https://t.co/VCaLTUt9MH
Paris, France
Joined March 2010
Can AI help understand how the brain learns to see the world? Our latest study, led by @JRaugel from FAIR at @AIatMeta and @ENS_ULM, is now out! 📄 https://t.co/y2Y3GP3bI5 🧵 A thread:
Sunday's random Python trick: avoid the (frankly broken) debugging experience of Python notebooks: 1. print your notebook kernel: import ipykernel as ipk; print(ipk.get_connection_file()) 2. attach a separate IPython session to it: jupyter console --existing $KERNEL_ID, then `%debug`
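The two steps of the trick above can be sketched end to end. Everything below is a hypothetical illustration: the connection-file path, kernel id, and key are made up, but the file format (JSON with ZeroMQ ports and a signing key) is what `jupyter console --existing` reads.

```python
import json
import os
import tempfile

# Step 1 (run inside the notebook) would be:
#   import ipykernel as ipk; print(ipk.get_connection_file())
# which prints a path like .../jupyter/runtime/kernel-1234abcd.json.
# Here we fabricate such a file to show what that path contains.
conn = {
    "shell_port": 53001, "iopub_port": 53002, "stdin_port": 53003,
    "control_port": 53004, "hb_port": 53005,
    "ip": "127.0.0.1", "transport": "tcp",
    "signature_scheme": "hmac-sha256", "key": "not-a-real-key",
}
path = os.path.join(tempfile.mkdtemp(), "kernel-1234abcd.json")
with open(path, "w") as f:
    json.dump(conn, f)

# Step 2 (from a separate terminal) attaches a full IPython session
# to the very same kernel, sharing its namespace:
#   jupyter console --existing kernel-1234abcd.json
# After an exception in the notebook, `%debug` in that console drops
# into the post-mortem debugger with a sane terminal UI.
kernel_id = os.path.basename(path)
print(kernel_id)  # kernel-1234abcd.json
```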
🧠 How does the hierarchy of speech representations unfold in the human brain? Our latest work, led by @GwilliamsL, together with Alec Marantz and @DavidPoeppel, is now out in PNAS: https://t.co/c9dpHvRZCu
Interested in a master's-level internship to work with us? Apply here:
🚨 Hiring: Master’s student intern 📅 Early 2026 | 📍 Rothschild Foundation Hospital, Paris 🧠 Looking for someone with experience in messy neural data (ideally sEEG) + deep learning 👤 Supervised by myself, in the team of @JeanRemiKing 🔗 Apply:
Happy to see our Brain2Qwerty paper featured in the State of AI Report 2025. Always exciting to watch brain decoding research find its way into broader AI discussions! 🧠 To read it: https://t.co/xroRXpakyU
arxiv.org
Modern neuroprostheses can now restore communication in patients who have lost the ability to speak or move. However, these invasive devices entail risks inherent to neurosurgery. Here, we...
🪩The one and only @stateofaireport 2025 is live! 🪩 It’s been a monumental 12 months for AI. Our 8th annual report is the most comprehensive it's ever been, covering what you *need* to know about research, industry, politics, safety and our new usage data. My highlight reel:
Glad to be presenting some brain-AI alignment work today at @ELLISforEurope x @unireps 💫
📢 Save the date! Join us for the next @ELLISforEurope x @unireps Speaker Series! 📅8th October – 16:00 CEST 📍 https://t.co/iHc93nIQJ4 🎙️Speakers: Keynote Talk by @maxseitzer & Flash Talk by @JRaugel
If you're at #COLM2025, go check out our latest work on how syntax is represented in LLMs 👇
Happy to share our new work accepted at #COLM2025 Probing syntax in LLMs: successes & remaining challenges 🔗 https://t.co/f2T4xJO5yb Joint work with Emmanuel Chemla, @JeanRemiKing & @lakretz Check out the poster Wednesday afternoon (Poster #40) Follow the 🧵for the details!
Interesting article from the @DKaiserlab and Cichy labs, with results similar to what we observed 10 years ago: https://t.co/4oof9AUHgD
"Recurrence affects the geometry of visual representations across the ventral visual stream in the human brain" (Aug. 2025) by S. Xie, J. Singer, B. Ilmaz, D. Kaiser*, R. Cichy* 🔗 https://t.co/PGjzPPlgAi
📊 His research uses encoding and decoding approaches to show how modern speech and language models account for brain responses to natural speech, measured with EEG, MEG, iEEG, and fMRI, even in children aged 2 to 12. 📆 November 19–21, 2025. +info👇 https://t.co/MOOhj7DNMP
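Encoding analyses of the kind mentioned above typically fit a regularized linear regression from model activations to recorded brain responses and score the predictions on held-out data. A minimal sketch with synthetic data (a generic illustration of the approach, not the actual pipeline behind this talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 stimuli, 50 model features, 10 "sensors".
n, d, s = 200, 50, 10
X = rng.standard_normal((n, d))                      # model activations
W_true = rng.standard_normal((d, s))
Y = X @ W_true + 0.5 * rng.standard_normal((n, s))   # noisy brain responses

# Ridge regression fit on the first half, evaluated on the second half.
tr, te = slice(0, n // 2), slice(n // 2, n)
lam = 1.0
W = np.linalg.solve(X[tr].T @ X[tr] + lam * np.eye(d), X[tr].T @ Y[tr])
pred = X[te] @ W

# "Encoding score": correlation between predicted and held-out responses,
# averaged over sensors.
score = np.mean([np.corrcoef(pred[:, i], Y[te][:, i])[0, 1]
                 for i in range(s)])
print(round(score, 2))
```

Real analyses cross-validate the regularization strength and run this per subject, per sensor/voxel, and often per time lag; the skeleton stays the same.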
1/ You might have seen it—DINOv3 is out! 🦖🦕 In this thread, we share key insights on our Gram anchoring ⚓︎ and how it helps to get smooth feature maps. 👇
Come join us in Toronto. @UofT is hiring a Professor of computational cognitive neuroscience. #neuroAI #compneuro
jobs.utoronto.ca
Professor - Computational Cognitive Neuroscience
🗣️Job alert: Our Brain and AI team at FAIR (@AIatMeta) is looking for a software engineer with experience in 3D rendering in the browser: https://t.co/UneZ0WFxIX Please RT 🙏
as well as the open source and open data #NeuroAI communities for making this possible!🙏
Thanks to all the great researchers who contributed to this project: @JRaugel, @MarcSzafraniec, @huyvvo, Camille Couprie, Seungeun Yi, @oriane_simeoni, @maxseitzer, @monsieurlabatut, @p_bojanowski, @valentinwyart, our institutions @AIatMeta and @ENS_ULM as well as
Overall, the training of DINOv3 mirrors some striking aspects of brain development: late-acquired representations map onto cortical areas with, e.g., greater expansion and slower timescales, suggesting that DINOv3 spontaneously captures some of the neuro-developmental trajectory.
→ Second factor: data type. Even models trained only on satellite or cellular images capture brain signals significantly, but the same model trained on natural images reaches higher encoding scores across all brain regions.
So what are the factors that lead DINOv3 to become brain-like? → First factor: model size. Bigger models become brain-like faster during training and reach higher brain-scores, especially in high-level brain regions.
Third, the representations of the visual cortex are typically acquired early on in the training of DINOv3. By contrast, it requires much more training to learn representations similar to those of the prefrontal cortex.
Surprisingly, these encoding, spatial and temporal scores all emerge across training, but at different speeds.
Second, DINOv3 learns a representational hierarchy which corresponds to the spatial and temporal hierarchies in the brain.