Tim Kietzmann
@TimKietzmann
Followers: 11K · Following: 4K · Media: 567 · Statuses: 6K
ML meets Neuroscience #NeuroAI, Full Professor at the Institute of Cognitive Science (@UniOsnabrueck), prev. @DondersInst, @Cambridge_Uni #neuroconnectionism
Joined October 2016
A long time coming, now out in Nature Machine Intelligence (@NatMachIntell): "Visual representations in the human brain are aligned with large language models." Check it out (and come chat with us about it at CCN).
🚨 Finally out in Nature Machine Intelligence!! "Visual representations in the human brain are aligned with large language models" https://t.co/GB5k6IV4Jg
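A common way to quantify this kind of brain-LLM alignment is representational similarity analysis (RSA): compute pairwise dissimilarities between stimuli in each system and correlate the two geometries. Below is a minimal generic sketch with random stand-in data, not the paper's actual pipeline.

```python
# Generic RSA sketch (illustrative stand-in data, not the paper's pipeline):
# compare the representational geometry of brain responses and LLM embeddings
# for the same set of images.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_images = 100
brain = np.random.randn(n_images, 5000)  # e.g. fMRI voxel patterns, one row per image
llm = np.random.randn(n_images, 4096)    # e.g. LLM embeddings for the same images

# Representational dissimilarity: pairwise correlation distance across images
rdm_brain = pdist(brain, metric="correlation")
rdm_llm = pdist(llm, metric="correlation")

# Alignment = rank correlation between the two representational geometries
rho, p = spearmanr(rdm_brain, rdm_llm)
print(f"brain-LLM representational alignment: rho={rho:.3f} (p={p:.3g})")
```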
We managed to integrate brain scans into LLMs for interactive brain reading and more. I think this is a big deal. Check out Vicky's post below for details. Super excited about this one!
Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language. tl;dr: you can now chat with a brain scan 🧠💬 1/n
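One plausible reading of "fusing brain data into an LLM" (an illustrative guess at the general recipe; CorText's actual architecture may differ): learn a projection from fMRI patterns into the LLM's token-embedding space and prepend the result to the question as soft tokens.

```python
# Illustrative sketch of brain-to-LLM fusion (NOT CorText's actual code):
# a learned linear map turns an fMRI pattern into a few "soft tokens" in
# the LLM's embedding space, prepended to the embedded question.
import torch
import torch.nn as nn

class BrainPrefix(nn.Module):
    def __init__(self, n_voxels: int, d_model: int, n_prefix: int = 8):
        super().__init__()
        self.n_prefix = n_prefix
        self.proj = nn.Linear(n_voxels, n_prefix * d_model)

    def forward(self, fmri: torch.Tensor) -> torch.Tensor:
        # fmri: (batch, n_voxels) -> (batch, n_prefix, d_model)
        return self.proj(fmri).view(fmri.shape[0], self.n_prefix, -1)

d_model, n_voxels = 4096, 10000            # hypothetical sizes
prefix = BrainPrefix(n_voxels, d_model)
fmri = torch.randn(2, n_voxels)
question = torch.randn(2, 12, d_model)     # embedded question tokens
inputs = torch.cat([prefix(fmri), question], dim=1)
# `inputs` would be passed to a frozen decoder-only LLM via `inputs_embeds`.
```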
✨ Meet our speakers! ✨ Among our speakers this year at #SNS2025 we have Tim Kietzmann (@TimKietzmann) Read the abstract here 💬 👇 https://t.co/UQj8nPN81j
#neuroscience #compneuro #NeuroAI
Hi, we will have three NeuroAI postdoc openings (3 years each) to work with Sebastian Musslick (@smusslick), Pascal Nieters and myself on task-switching, replay, and visual information routing. Reach out if you are interested in any of the above, I'll be at CCN next week!
Do come and talk to us about any of the above and whatever #NeuroAI is on your mind. Excited for this upcoming #CCN2025, and looking forward to exchanging ideas with all of you. All posters can be found here:
kietzmannlab.org
And last but not least, Fraser Smith's work on understanding how occluded objects are represented in visual cortex. Time: Tuesday, August 12, 1:30 – 4:30 pm Location: A66, de Brug & E‑Hall
Please also check out Songyun Bai's poster on further AVS findings that we were involved in: Neural oscillations encode context-based informativeness during naturalistic free viewing. Time: Tuesday, August 12, 1:30 – 4:30 pm Location: A165, de Brug & E‑Hall
Friday keeps on giving. Interested in representational drift in macaques? Then come check out @AnthesDaniel's work providing first evidence for a sequence of three different, yet comparatively stable clusters in V4. Time: August 15, 2-5pm Location: Poster C142, de Brug & E‑Hall
Another Friday feat: @PhilipSulewski and @thonor4's modelling work. Predictive remapping and allocentric coding as consequences of energy efficiency in RNN models of active vision Time: Friday, August 15, 2:00 – 5:00 pm, Location: Poster C112, de Brug & E‑Hall
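"Energy efficiency" in RNN models is typically implemented as a penalty on unit activity added to the task loss; the snippet below sketches that generic ingredient (an assumption about the setup, not the poster's actual model).

```python
# Generic sketch of an "energy" penalty in an RNN (an assumption about the
# general idea; not the poster's actual model or loss).
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=64, hidden_size=128, batch_first=True)
readout = nn.Linear(128, 10)

x = torch.randn(8, 20, 64)            # (batch, time, features), e.g. glimpses
target = torch.randint(0, 10, (8,))

activity, _ = rnn(x)                  # (batch, time, hidden) unit activity
task_loss = nn.functional.cross_entropy(readout(activity[:, -1]), target)
energy = activity.pow(2).mean()       # metabolic-cost proxy: mean squared activity
loss = task_loss + 1e-2 * energy      # efficiency pressure shapes the dynamics
loss.backward()
```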
Also on Friday, Victoria Bosch (@__init_self) presents her superb work on fusing brain scans with LLMs. CorText-AMA: brain-language fusion as a new tool for probing visually evoked brain responses Time: 2 – 5 pm Location: Poster C119, de Brug & E‑Hall https://t.co/F14znvfzer
On Friday, @carmen_amme has a talk & poster on exciting AVS analyses. Encoding of Fixation-Specific Visual Information: No Evidence of Information Carry-Over between Fixations Talk: 12:00 – 1:00 pm, Room C1.04 Poster: C153, 2:00 – 5:00 pm, de Brug & E‑Hall https://t.co/HROYJigJUV
Also on Tuesday, @RowanSommers will present our new WiNN architecture. Title: Sparks of cognitive flexibility: self-guided context inference for flexible stimulus-response mapping by attentional routing Time: August 12, 1:30 – 4:30 pm Location: A136, de Brug & E‑Hall
On Tuesday, Sushrut's (@martisamuser) Glimpse Prediction Networks will make their debut: a self-supervised deep learning approach for scene representations that align extremely well with the human ventral stream. Time: August 12, 1:30 – 4:30 pm Location: A55, de Brug & E‑Hall
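Judging by the name, the objective is likely next-glimpse prediction: encode a sequence of fixated patches and predict the embedding of the upcoming glimpse. A toy sketch under that assumption (hypothetical shapes and architecture, not the actual Glimpse Prediction Network):

```python
# Toy next-glimpse prediction sketch (hypothetical; not the actual model):
# encode a sequence of fixated image patches ("glimpses"), summarize them
# with a recurrent net, and predict the embedding of the next glimpse.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
                        nn.Linear(256, 128))
rnn = nn.GRU(input_size=128, hidden_size=128, batch_first=True)
predictor = nn.Linear(128, 128)

glimpses = torch.randn(8, 5, 3, 32, 32)           # (batch, n_glimpses, C, H, W)
z = encoder(glimpses.flatten(0, 1)).view(8, 5, 128)
context, _ = rnn(z[:, :-1])                       # summarize past glimpses
pred = predictor(context[:, -1])                  # predict next glimpse code
loss = nn.functional.mse_loss(pred, z[:, -1].detach())  # self-supervised target
loss.backward()
```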
In the "Modeling the Physical Brain" event, I will be speaking about our work on topographic neural networks. Time: Monday, August 11, 11:30 am – 6:00 pm Location: Room A2.07 Paper:
nature.com
Nature Human Behaviour - Lu et al. introduce all-topographic neural networks as a parsimonious model of the human visual cortex.
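The defining move of topographic networks is to arrange a layer's units on a 2D sheet and penalize differences between neighbours, so that smooth maps emerge. A toy version of such a smoothness penalty (illustrative only; the paper's all-topographic formulation differs in detail):

```python
# Toy topographic smoothness penalty (illustrative; see the paper for the
# actual all-topographic formulation). Units of a layer are arranged on a
# 2D grid, and neighbouring units are pushed toward similar incoming weights.
import torch
import torch.nn as nn

grid = 16                             # layer = 16x16 sheet of units
layer = nn.Linear(784, grid * grid)

W = layer.weight.view(grid, grid, -1)             # (row, col, inputs)
smooth = ((W[1:] - W[:-1]).pow(2).mean()          # neighbours along rows
          + (W[:, 1:] - W[:, :-1]).pow(2).mean()) # neighbours along columns
# Adding `lambda * smooth` to the task loss induces topographic maps.
```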
First, Zejin Lu @lu_zejin will talk about how adopting a human developmental visual diet yields robust, shape-based AI vision. Biological inspiration for the win! Talk Time/Location: Monday, 3-6 pm, Room A2.11 Poster Time/Location: Friday, 2-5 pm, C116 at de Brug & E‑Hall
OK, time for a CCN runup thread. Let me tell you about all the lab’s projects presented at CCN this year. #CCN2025
Hey @Shutterstock, we recently purchased 200,000 API credits (about 2,000 euros) for your 3D Generative API service (https://t.co/8NuaAahLDX). The service has been down for nearly a month. Any plans on when you will bring it back online? Thanks!
Exciting new preprint from the lab: “Adopting a human developmental visual diet yields robust, shape-based AI vision”. A most wonderful case where brain inspiration massively improved AI solutions. Work with @lu_zejin @martisamuser and Radoslaw Cichy https://t.co/XVYqQPjoTA
arxiv.org
Despite years of research and the dramatic scaling of artificial intelligence (AI) systems, a striking misalignment between artificial and human vision persists. Contrary to humans, AI heavily...
#AI and #computervision folks: this will be interesting to you! Robust visual inference beyond scale, but achieved by taking inspiration from how infant vision develops. Shown to work across many architectures and comes with a neat preprocessing pipeline that you can use.
Exciting new preprint from the lab: “Adopting a human developmental visual diet yields robust, shape-based AI vision”. A most wonderful case where brain inspiration massively improved AI solutions. Work with @lu_zejin @martisamuser and Radoslaw Cichy https://t.co/XVYqQPjoTA
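The "developmental diet" idea, as the abstract sketches it, is to let training inputs mature the way infant vision does, e.g. starting at newborn-like acuity and sharpening over training. A minimal illustration of such a blur schedule (my sketch, not the released preprocessing pipeline):

```python
# Minimal sketch of a developmental acuity schedule (my illustration of
# the idea, not the released pipeline): training images start heavily
# blurred (newborn-like acuity) and sharpen as training "age" increases.
import torch
import torchvision.transforms.functional as TF

def developmental_blur(img: torch.Tensor, progress: float) -> torch.Tensor:
    """progress in [0, 1]: 0 = newborn (very low acuity), 1 = adult."""
    sigma = 4.0 * (1.0 - progress) + 1e-3        # blur shrinks with "age"
    kernel = int(2 * round(3 * sigma) + 1)       # odd kernel covering ~3 sigma
    return TF.gaussian_blur(img, kernel_size=kernel, sigma=sigma)

img = torch.rand(3, 224, 224)
early = developmental_blur(img, progress=0.0)    # heavily blurred input
late = developmental_blur(img, progress=1.0)     # near-adult acuity
```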