Tom George Profile
Tom George

@TomNotGeorge

Followers 848 · Following 717 · Media 66 · Statuses 344

Neuroscience/ML PhD @UCL. NeuroAI, navigation, hippocampus

London
Joined January 2022
@TomNotGeorge
Tom George
8 months
What are the brain’s “real” tuning curves? Our new preprint “SIMPL: Scalable and hassle-free optimisation of neural representations from behaviour” argues that existing techniques for latent variable discovery are lacking. We suggest a much simpl-er way to do things. 1/21🧵
4
53
293
@TomNotGeorge
Tom George
4 months
A great article from SWC on our neural data analysis technique (now accepted at ICLR). More updates to come soon, plus I'll be presenting this at @CosyneMeeting too!
@SWC_Neuro
SWC
4 months
How does the brain represent imagined locations? Researchers at SWC, @GatsbyUCL and @UCL developed SIMPL, a method to refine neural tuning curves by correcting distortions from imagined locations—sharpening our view of place cell activity. Read more:
0
5
30
@TomNotGeorge
Tom George
6 months
The application deadline has been extended. Two more weeks to apply, don't miss out!
@trend_camina
TReND-CaMinA
6 months
Good news!🎉 The application deadline for TReND-CaMinA has been extended to ❗️31st January❗️. Don’t miss this chance to boost your computational neuroscience journey and become part of our community🧠✨. Apply now: #Neuroscience #Education #ScienceForChange
0
1
3
@TomNotGeorge
Tom George
7 months
🌍🧠💻Applications are well and truly open for the third CaMinA. African nationals studying biology, medicine, engineering, maths etc. can, and should, apply for this summer school in Zambia. Neuro and ML are changing the world and now is the time to get into them, please RT!
@trend_camina
TReND-CaMinA
7 months
🎉Happy New Year! Start 2025 by investing in your future!🚀 Just 2 weeks left to apply for our Computational Neuroscience & Machine Learning course🧠🤖. Let’s make this year one for growth & discovery. RT to spread the word!🙌 🔗 #Growth #Opportunity
0
6
15
@TomNotGeorge
Tom George
8 months
CaMinA is back for its 3rd year. This time we're going to beautiful Zambia🇿🇲! I'm proud to see this course grow and bring together the smartest students across Africa with leading neuro institutes like @AllenInstitute and @SWC_Neuro. Applications open soon, please share widely!
@trend_camina
TReND-CaMinA
8 months
🌍Exciting news! The 2025 TReND-CaMinA Course will be held in Lusaka, Zambia 🇿🇲 from July 7th–23rd. Dive into computational neuroscience and machine learning with us! 📅 Applications open: December 15th. 🔗 More info: Stay tuned & spread the word! 🧠✨
0
4
22
@TomNotGeorge
Tom George
8 months
RT @trend_camina: 🌍Exciting news! The 2025 TReND-CaMinA Course will be held in Lusaka, Zambia 🇿🇲 from July 7th–23rd. Dive into computationa….
0
36
0
@TomNotGeorge
Tom George
8 months
At the risk of rambling I'll end the thread here and perhaps do a deeper dive in the future. Give it a read (or better, try it on your data) and let us know your thoughts! 21/21
tomge.org
An efficient technique for optimising tuning curves starting from behaviour by iteratively refitting the tuning curves and redecoding the latent variables.
3
0
9
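The one-line summary above is the clearest description of the method, so here is a minimal 1D sketch of that iterative refit-and-redecode loop, assuming Poisson spiking, histogram tuning-curve estimates and maximum-likelihood decoding. All function names and parameters are illustrative, not the actual SIMPL implementation or API.

```python
import numpy as np

def fit_tuning_curves(latent, spikes, bins):
    """Histogram estimate of each cell's tuning curve along the current latent estimate."""
    occupancy = np.maximum(np.histogram(latent, bins=bins)[0], 1e-9)
    curves = np.stack([np.histogram(latent, bins=bins, weights=spikes[:, c])[0]
                       for c in range(spikes.shape[1])]) / occupancy
    return curves  # (n_cells, n_bins): mean spike count per time bin in each latent bin

def decode_latent(spikes, curves, bins):
    """Maximum-likelihood decode of the latent at each time bin under a Poisson model."""
    centres = 0.5 * (bins[:-1] + bins[1:])
    rates = np.maximum(curves, 1e-9)                       # (n_cells, n_bins)
    loglik = spikes @ np.log(rates) - rates.sum(axis=0)    # (T, n_bins), up to a constant
    return centres[np.argmax(loglik, axis=1)]

def simpl_like(behaviour, spikes, bins, n_iters=5):
    """Alternate between refitting tuning curves and redecoding the latent."""
    latent = np.asarray(behaviour, dtype=float)            # key trick: initialise at behaviour
    for _ in range(n_iters):
        curves = fit_tuning_curves(latent, spikes, bins)   # refit the tuning curves
        latent = decode_latent(spikes, curves, bins)       # redecode the latent
    return curves, latent
```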
@TomNotGeorge
Tom George
8 months
This isn’t cheating: behaviour has always been there for the taking and we should exploit it. If we ignore behaviour and initialise randomly, SIMPL still works but the latent space isn’t smooth and “identifiable”. This is certainly something to consider… 20/21
1
0
5
@TomNotGeorge
Tom George
8 months
Initialising at behaviour is a powerful trick here. In many regions (e.g., but not limited to, hippocampus 👀), a behavioural correlate (position👀) exists which is VERY CLOSE to the true latent. Starting right next to the global maximum helps make optimisation straightforward.
1
0
6
@TomNotGeorge
Tom George
8 months
These non-local dynamics aren’t a new discovery by any means but this is, in our opinion, the correct and quickest way to find them. 18/21.
1
0
6
@TomNotGeorge
Tom George
8 months
And there’s cool stuff in the optimised latent too. It mostly tracks behaviour (hippocampus is still mostly a cognitive map) but makes occasional big jumps, as though the animal is contemplating another location in the environment. 17/21
1
0
7
@TomNotGeorge
Tom George
8 months
Dubious analogy: Using behaviour alone to study neural representations (status quo for hippocampus) is like wearing mittens and trying to figure out the shape of a delicate statue in the dark. Everything is blurred. 16/21
1
1
6
@TomNotGeorge
Tom George
8 months
The old paradigm of “just smooth spikes against position” is wrong! Those aren’t tuning curves in a causal sense…they’re just smoothed spikes. These “real” tuning curves (the output of an algorithm like SIMPL) are the ones we should be analysing/theorising about. 15/21.
1
0
6
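For contrast, here is a minimal sketch of the "smooth spikes against position" estimate this tweet pushes back on: bin spikes by recorded position, smooth, and divide by smoothed occupancy. It assumes 1D position sampled at a fixed interval dt; the function name and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def classical_tuning_curve(position, spike_counts, bins, dt, sigma_bins=2.0):
    """Occupancy-normalised, Gaussian-smoothed spike map against recorded position."""
    spike_map = np.histogram(position, bins=bins, weights=spike_counts)[0]
    occupancy = np.histogram(position, bins=bins)[0].astype(float) * dt   # seconds per bin
    smoothed_spikes = gaussian_filter1d(spike_map, sigma_bins)
    smoothed_occupancy = gaussian_filter1d(occupancy, sigma_bins)
    return smoothed_spikes / np.maximum(smoothed_occupancy, 1e-9)         # firing rate (Hz)
```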
@TomNotGeorge
Tom George
8 months
It’s quite a sizeable effect. The median place cell has 23% more place fields, the median place field is 34% smaller and has a firing rate 45% higher. It’s hard to overstate this result… 14/21
1
0
5
@TomNotGeorge
Tom George
8 months
When applied to a similarly large (but now real) hippocampal dataset, SIMPL optimises the tuning curves. “Real” place fields, it turns out, are much smaller, sharper, more numerous and more uniformly distributed than previously thought. 13/21
1
0
6
@TomNotGeorge
Tom George
8 months
SIMPL outperforms CEBRA — a contemporary, more general-purpose, neural-net-based technique — in terms of performance and compute-time. It’s over 30x faster. 12/21
1
1
6
@TomNotGeorge
Tom George
8 months
Let’s test SIMPL: We make artificial grid cell data and add noise to the position (latent) variable. This noise blurs the grid fields out of recognition. Apply SIMPL and you recover a perfect estimate of the true trajectory and grid fields in a handful of compute-seconds. 11/21
1
0
6
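A toy version of the kind of synthetic test described above, using a 1D periodic "grid-like" tuning curve per cell (real grid cells are 2D, so this is only an analogue) and adding noise to the observed position; all parameters are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt, n_cells = 20_000, 0.02, 30

true_latent = np.cumsum(rng.normal(0, 0.02, T)) % 1.0             # random walk on [0, 1)
observed_position = (true_latent + rng.normal(0, 0.1, T)) % 1.0   # behaviour = latent + noise

phases = rng.uniform(0, 1, n_cells)                                # one field phase per cell
rates = 10 * np.exp(3 * (np.cos(2 * np.pi * (true_latent[:, None] - phases)) - 1))
spikes = rng.poisson(rates * dt)                                   # (T, n_cells) spike counts

# Tuning curves estimated against observed_position come out blurred; re-estimating
# them against a recovered latent should restore the sharp periodic fields.
```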
@TomNotGeorge
Tom George
8 months
I think this gif explains it well. The animal is “thinking” of the green location but located at the yellow one. Spikes plotted against green give sharp grid fields, but against yellow they are blurred. In the brain this discrepancy will be caused by replay, planning, uncertainty and more.
1
2
10
@TomNotGeorge
Tom George
8 months
Behaviour ≠ latent. This is obvious in non-navigational regions. But for HPC/MEC/etc. it’s definitely often overlooked… behaviour alone explains the spikes SO well (read: grid cells look pretty) that it’s common to just stop there. But that leaves some error. 9/21
1
0
7
@TomNotGeorge
Tom George
8 months
In order to know the “true” tuning curves we need to know the “true” latent which passed through those curves to generate spikes, i.e. what the animal was thinking of, not what it was doing. This latent, of course, is often close to a behavioural readout such as position.
1
1
10
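Restating that generative picture as equations (my notation, not the paper's): spikes are emitted through the true tuning curves evaluated at the latent, and recorded behaviour is only a noisy proxy for that latent.

```latex
% Generative model implied above: spike count y_{t,c} of cell c in time bin t is
% Poisson with rate set by the "true" tuning curve f_c at the latent z_t, while
% the recorded behaviour x_t only approximates z_t.
\begin{aligned}
  y_{t,c} &\sim \mathrm{Poisson}\!\big(f_c(z_t)\,\Delta t\big),\\
  x_t &= z_t + \varepsilon_t .
\end{aligned}
```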
@TomNotGeorge
Tom George
8 months
So what’s the idea inspiring this? Basically, tuning curves (defined as plotting spikes against behaviour) aren’t the brain’s “real” tuning curves in any causal sense. But often we analyse and theorise about them as though they are. That’s a problem. 7/21
1
2
8