Victoria Bosch
@__init_self

Followers: 403 · Following: 3K · Media: 12 · Statuses: 134

neuromantic - ML and cognitive computational neuroscience - PhD student at Kietzmann Lab, Osnabrück University.

Joined October 2021
@__init_self
Victoria Bosch
11 days
Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language. tl;dr: you can now chat with a brain scan 🧠💬 1/n
6 · 36 · 173
@martisamuser
Sushrut Thorat
11 days
LLMs have enabled interaction w/ various kinds of data (image/audio/math/action) through language—a true breakthrough of our times. Ofc, as neuroscientists we are curious if this extends to brain data. Yes, we can flexibly "read out" a lot! Limits remain to be seen.
@__init_self
Victoria Bosch
11 days
Very excited to finally share this project with the world. Thanks to all co-authors! 🦾 @AnthesDaniel @AdrienDoerig @martisamuser @konigpeter @TimKietzmann /fin
0 · 0 · 4
@__init_self
Victoria Bosch
11 days
We are convinced that these results mark a shift from static neural decoding toward interactive, generative brain-language interfaces. Preprint:
arxiv.org
Large language models (LLMs) have revolutionized human-machine interaction, and have been extended by embedding diverse modalities such as images into a shared language space. Yet, neural decoding...
1 · 3 · 9
@__init_self
Victoria Bosch
11 days
CorText also responds to in-silico microstimulations in line with experimental predictions: for example, when amplifying face-selective voxels for trials where no people were shown to the participant, CorText starts hallucinating them. With inhibition we can "remove people". 7/n
2 · 0 · 4
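The microstimulation idea in the tweet above can be sketched as scaling a region-of-interest mask before the scan is fed to the model. This is a minimal illustrative sketch, not the paper's actual procedure; the function name, ROI mask, and gain parameter are all assumptions.

```python
import numpy as np

def microstimulate(voxels, roi_mask, gain):
    """In-silico microstimulation sketch: scale an ROI (e.g. the
    face-selective voxels) by `gain` before the scan is projected
    into the model. gain > 1 amplifies the ROI; 0 <= gain < 1
    inhibits it. Illustrative only."""
    stim = voxels.copy()
    stim[roi_mask] *= gain
    return stim

voxels = np.array([1.0, 2.0, 3.0, 4.0])          # toy scan
face_roi = np.array([False, True, True, False])  # hypothetical face-selective voxels
amplified = microstimulate(voxels, face_roi, gain=2.0)
print(amplified)  # [1. 4. 6. 4.]
```

With `gain=0.0` the same function implements the "inhibition" case described in the tweet.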
@__init_self
Victoria Bosch
11 days
Following Shirakawa et al. (2025), we test zero-shot neural decoding: When entire semantic categories (e.g., zebras, surfers, airplanes) are withheld during training, the model can still give meaningful descriptions of the visual content. 6/n
1 · 0 · 4
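The zero-shot evaluation described above amounts to withholding entire semantic categories at training time. A minimal sketch of that split, assuming a simple trial record with a `category` field (the field name and data layout are illustrative, not from the paper):

```python
def holdout_split(trials, held_out):
    """Split trials so that whole semantic categories (e.g. zebras,
    surfers, airplanes) never appear during training; the model is
    then asked to describe held-out scans zero-shot."""
    train = [t for t in trials if t["category"] not in held_out]
    test = [t for t in trials if t["category"] in held_out]
    return train, test

trials = [
    {"id": 0, "category": "zebra"},
    {"id": 1, "category": "dog"},
    {"id": 2, "category": "surfer"},
    {"id": 3, "category": "kitchen"},
]
train, test = holdout_split(trials, held_out={"zebra", "surfer"})
print(len(train), len(test))  # 2 2
```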
@__init_self
Victoria Bosch
11 days
What can we do with it? For example, we can have CorText answer questions about a visual scene ("What's in this image?" "How many people are there?") that a person saw while in an fMRI scanner. CorText never sees the actual image, only the brain scan. 5/n
1 · 0 · 18
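The key point in the tweet above, that the model receives only the brain scan plus the question, can be sketched as input assembly: projected brain tokens are concatenated with the embedded question, and no image ever enters the sequence. The function name and toy embeddings are hypothetical.

```python
def build_prompt_embeddings(brain_tokens, question_embeddings):
    """Hypothetical input assembly: the model sees the projected
    brain tokens followed by the embedded question tokens; the raw
    image is never part of the input sequence."""
    return brain_tokens + question_embeddings  # list concatenation

brain_tokens = [[0.1, 0.2], [0.3, 0.4]]          # 2 toy soft brain tokens
question = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # 3 toy text-token embeddings
seq = build_prompt_embeddings(brain_tokens, question)
print(len(seq))  # 5 tokens: 2 brain + 3 text
```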
@__init_self
Victoria Bosch
11 days
By moving neural data into LLM token space, we gain open-ended, linguistic access to brain scans as experimental probes. At the same time, this has the potential to unlock many additional downstream capabilities (think reasoning, in-context learning, web-search, etc). 4/n
1 · 0 · 4
@__init_self
Victoria Bosch
11 days
To accomplish this, CorText fuses fMRI data into the latent space of an LLM, turning neural signal into tokens that the model can reason about in response to questions. This sets it apart from existing decoding techniques, which map brain data into static embeddings/output. 3/n
1 · 0 · 8
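The fusion step described in the tweet above, turning neural signal into tokens in the LLM's latent space, can be sketched as a linear adapter from voxels to a few soft tokens. The shapes, the random weights, and the linear map itself are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def brain_to_soft_tokens(voxels, W, n_tokens, d_model):
    """Illustrative sketch: map a flat fMRI voxel vector onto
    `n_tokens` soft tokens of width `d_model` via a (normally
    learned) linear projection W, ready to prepend to the LLM's
    text embeddings."""
    flat = voxels @ W                     # (n_tokens * d_model,)
    return flat.reshape(n_tokens, d_model)

rng = np.random.default_rng(0)
n_voxels, n_tokens, d_model = 1024, 8, 64  # toy sizes
W = rng.normal(0.0, 0.02, size=(n_voxels, n_tokens * d_model))
scan = rng.normal(size=n_voxels)           # one fMRI trial, flattened
soft_tokens = brain_to_soft_tokens(scan, W, n_tokens, d_model)
print(soft_tokens.shape)                   # (8, 64)
```

Because the scan becomes ordinary token embeddings, the frozen LLM can attend to it exactly as it does to text, which is what distinguishes this setup from decoders that emit a single static embedding.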
@__init_self
Victoria Bosch
11 days
Generative language models are revolutionizing human-machine interaction. Importantly, such systems can now reason cross-modally (e.g. vision-language models). Can we do the same with neural data - i.e., can we build brain-language models with comparable flexibility? 2/n
1 · 0 · 8
@JRaugel
Joséphine Raugel
2 months
Very pleased to share our latest study!
@JeanRemiKing
Jean-Rémi King
2 months
Can AI help understand how the brain learns to see the world? Our latest study, led by @JRaugel from FAIR at @AIatMeta and @ENS_ULM, is now out! 📄 https://t.co/y2Y3GP3bI5 🧵 A thread:
8 · 41 · 527
@AdrienDoerig
Adrien Doerig
3 months
🚨 Finally out in Nature Machine Intelligence!! "Visual representations in the human brain are aligned with large language models" https://t.co/GB5k6IV4Jg
nature.com
Nature Machine Intelligence - Doerig, Kietzmann and colleagues show that the brain’s response to visual scenes can be modelled using language-based AI representations. By linking brain...
2 · 55 · 243
@ykamit
Yuki Kamitani
4 months
Our new study in @NatComputSci, led by Haibao Wang, presents a neural code converter aligning brain activity across individuals & scanners without shared stimuli by minimizing content loss, paving the way for scalable decoding and cross-site data analysis.
nature.com
Nature Computational Science - A neural code conversion method is introduced using deep neural network representations to align brain data across individuals without shared stimuli. The approach...
2 · 38 · 123
@TimKietzmann
Tim Kietzmann
4 months
Exciting new preprint from the lab: “Adopting a human developmental visual diet yields robust, shape-based AI vision”. A most wonderful case where brain inspiration massively improved AI solutions. Work with @lu_zejin @martisamuser and Radoslaw Cichy https://t.co/XVYqQPjoTA
arxiv.org
Despite years of research and the dramatic scaling of artificial intelligence (AI) systems, a striking misalignment between artificial and human vision persists. Contrary to humans, AI heavily...
5 · 51 · 145
@cogscikid
Wilka Carvalho
5 months
Excited to share a project specifying a research direction I think will be particularly fruitful for theory-driven cognitive science that aims to explain natural behavior! We're calling this direction "Naturalistic Computational Cognitive Science"
4 · 47 · 218
@dyot_meet_mat
Mona
5 months
"The Garden of Forking Paths" By: Opus4🤖
3 · 3 · 19
@martin_hebart
Martin Hebart
5 months
Very happy to announce that our paper comparing dimensions in human and DNN representations is now out in @NatMachIntell https://t.co/RlaPcfrqLT
nature.com
Nature Machine Intelligence - An interpretability framework that compares how humans and deep neural networks process images has been presented. Their findings reveal that, unlike humans, deep...
@martin_hebart
Martin Hebart
1 year
What makes humans similar or different to AI? In a new study, led by @florianmahner and @lukas_mut and w/ @umuguc, we took a deep look at the factors underlying their representational alignment, with surprising results. https://t.co/w1YNZhS6ZW 🧵
2 · 21 · 78
@seeingwithsound
The vOICe vision BCI 🧠🇪🇺
5 months
Beyond reorganization: Intrinsic cortical hierarchies constrain experience-dependent plasticity in sensory-deprived humans https://t.co/Mr3pkzsNIY "we confirm that auditory and speech related features are redirected to deprived visual cortices in blind individuals"; #neuroscience
1 · 24 · 65
@NatureHumBehav
Nature Human Behaviour
5 months
In this study, Lu et al. introduce All-Topographic Neural Networks (All-TNN) as a parsimonious model of the human visual cortex. https://t.co/jwALg8uztQ
nature.com
Nature Human Behaviour - Lu et al. introduce all-topographic neural networks as a parsimonious model of the human visual cortex.
2 · 18 · 48