Satpreet Singh

@tweetsatpreet

Followers: 3K · Following: 18K · Media: 61 · Statuses: 2K

AI x Neuro/Bio Postdoc @harvardmed @KempnerInst; PhD @UW; Ex @Meta @LinkedIn.

🌎 · Joined October 2010
@tweetsatpreet
Satpreet Singh
4 years
1/n Excited to share our new preprint where we study turbulent plume tracking using deep reinforcement learning (DRL)-trained RNN *agents* and find many intriguing similarities with flying insects. w/ @FlorisBreugel @RajeshPNRao @bingbrunton; #tweeprint @flypapers #Drosophila
8 replies · 43 reposts · 175 likes
@oritpeleg
Orit Peleg
1 month
More on collective behavior: Our new Annual Review of Biophysics piece - with the stellar Danielle Chase - explores how animals sense, share information, and make group decisions. In honeybees and beyond 🐝 https://t.co/UcuG35gUu5
15 replies · 459 reposts · 2K likes
@SussilloDavid
David Sussillo
1 month
Coming March 17, 2026! Just got my advance copy of Emergence — a memoir about growing up in group homes and somehow ending up in neuroscience and AI. It’s personal, it’s scientific, and it’s been a wild thing to write. Grateful and excited to share it soon.
16 replies · 38 reposts · 319 likes
@PashavanBijlert
Pasha van Bijlert
8 months
I promised I’d make a thread about it, so here goes - Are you interested in horses? Musculoskeletal modelling? Predictive simulations of quadrupedal gaits? Then this paper is for you! This made the cover of @ICB_journal! With @tgeijten @Anneschulp Ineke Smit & Karl Bates
5 replies · 47 reposts · 203 likes
@SuryaGanguli
Surya Ganguli
2 months
If you like scaling laws in AI you’ll love scaling laws in biology, like allometric scaling of energy production density as a power law with exponent -1/4 across hundreds of millions of years of evolution. Also a new and similar scaling law for sleep - connecting it to metabolism!
@fedichev
Peter Fedichev
2 months
As you know I'm obsessed with power laws in biology, which are a consequence of fundamental principles, like energy conservation from the first law of thermodynamics. Geoffrey West showed how highly optimized biological networks—think blood vessels or respiratory systems…
3 replies · 16 reposts · 121 likes
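For readers who want the -1/4 exponent made explicit: it follows from Kleiber's law (whole-organism metabolic rate scaling as the 3/4 power of mass, which West and colleagues derived from optimized transport networks) once you divide by mass to get a density. A quick sketch:

```latex
% Kleiber's law: whole-organism metabolic rate B scales with body mass M as
B \propto M^{3/4}
% so the mass-specific rate (energy production density) falls off as
\frac{B}{M} \propto M^{3/4 - 1} = M^{-1/4}
% e.g. a 10^4-fold mass increase (~30 g mouse to ~300 kg cow) cuts
% per-gram power output by (10^4)^{-1/4} = 10^{-1}, a factor of ten.
```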
@t_andy_keller
Andy Keller
2 months
Why do video models handle motion so poorly? It might be a lack of motion equivariance. Very excited to introduce: Flow Equivariant RNNs (FERNNs), the first sequence models to respect symmetries over time. Paper: https://t.co/dkk43PyQe3 Blog: https://t.co/I1gpam1OL8 1/🧵
7 replies · 72 reposts · 396 likes
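A toy illustration of what "respecting a symmetry" buys a sequence model (this is the simplest static-shift case, not the time-varying flows the FERNN paper handles; the code is an illustrative sketch, not the paper's construction): if the recurrent update is built from circular convolutions, shifting every input frame shifts the hidden state in lockstep.

```python
import numpy as np

def step(h, x, w_h, w_x):
    # Recurrent update built from circular convolutions (via FFT), which
    # commute with circular shifts -- hence the update is shift-equivariant.
    conv = lambda w, v: np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(v)))
    return np.tanh(conv(w_h, h) + conv(w_x, x))

rng = np.random.default_rng(0)
w_h, w_x = rng.normal(size=16) * 0.1, rng.normal(size=16) * 0.1
xs = rng.normal(size=(5, 16))          # a length-5 sequence of 1-D "frames"

h = np.zeros(16)
for x in xs:                           # run on the original sequence
    h = step(h, x, w_h, w_x)

h_shift = np.zeros(16)
for x in xs:                           # run on the spatially shifted sequence
    h_shift = step(h_shift, np.roll(x, 3), w_h, w_x)

# Equivariance: shifting every input frame shifts the final state identically.
assert np.allclose(np.roll(h, 3), h_shift)
```

The FERNN contribution, as the tweet frames it, is extending this kind of guarantee to symmetries that evolve over time (motion), rather than the fixed shift checked here.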
@NeurIPSConf
NeurIPS Conference
2 months
NeurIPS is pleased to officially endorse EurIPS, an independently organized meeting taking place in Copenhagen this year, which will offer researchers an opportunity to additionally present their accepted NeurIPS work in Europe, concurrently with NeurIPS. Read more in our blog.
11 replies · 114 reposts · 794 likes
@AdaFang_
Ada Fang
2 months
Announcing AI for Science at @NeurIPSConf 2025! Join us in discussing the reach and limits of AI for scientific discovery 🚀 📍 Workshop submission deadline: Aug 22 💡 Dataset proposal competition: more details coming soon! ✨ Amazing lineup of speakers and panelists
3 replies · 27 reposts · 155 likes
@eigensteve
Steven Brunton
2 months
New Book & Video Series!!! (late 2025) Optimization Bootcamp: Applications in Machine Learning, Control, and Inverse Problems. Comment for a sneak peek to help proofread and I'll DM (proofreading, typos, HW problems all get acknowledgment in the book!)
244 replies · 203 reposts · 2K likes
@Napoolar
Thomas Fel
2 months
At ICML for the next 2 days to present multiple works, if you're into interpretability, complexity, or just wanna know how cool @KempnerInst is, hit me up 👋
2 replies · 14 reposts · 85 likes
@cogscikid
Wilka Carvalho
2 months
Excited to share a new project spanning cognitive science and AI where we develop a novel deep reinforcement learning model, Multitask Preplay, that explains how people generalize to new tasks that were previously accessible but unpursued.
2 replies · 7 reposts · 38 likes
@qiyang_li
Qiyang Li
2 months
Everyone knows action chunking is great for imitation learning. It turns out that we can extend its success to RL to better leverage prior data for improved exploration and online sample efficiency! https://t.co/J5LdRRYbSH The recipe to achieve this is incredibly simple. 🧵 1/N
3 replies · 69 reposts · 364 likes
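For readers unfamiliar with the term: action chunking means the policy commits to a short sequence of actions at once instead of resampling every step. A minimal sketch of that mechanism follows (the actual RL recipe is in the linked paper; `env`, `chunk_policy`, and their signatures are illustrative placeholders):

```python
def rollout_chunked(env, chunk_policy, horizon=200):
    """Collect one episode with an action-chunking policy.

    chunk_policy(obs) -> sequence of actions: the policy commits to a
    whole chunk and executes it open-loop before re-observing.
    (Hypothetical env/policy interfaces, for illustration only.)
    """
    obs = env.reset()
    transitions, t = [], 0
    while t < horizon:
        chunk = chunk_policy(obs)              # predict a whole chunk
        for action in chunk[: horizon - t]:    # execute it open-loop
            next_obs, reward, done = env.step(action)
            transitions.append((obs, action, reward, next_obs, done))
            obs, t = next_obs, t + 1
            if done:
                return transitions
    return transitions
```

The plausible benefit for online RL, per the tweet, is that exploration noise stays correlated across a chunk, giving temporally coherent behavior that better matches chunked prior (imitation-style) data.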
@KanakaRajanPhD
Kanaka Rajan
2 months
(1/7) New preprint from the Rajan Lab! 🧠🤖 @RyanPaulBadman1 & @SimmonsEdler show, through cognitive science, neuroscience & ethology, how an AI agent with fewer ‘neurons’ than an insect can forage, find safety & dodge predators in a virtual world. Here's what we did. Paper: https://t.co/DvRKjERrGl
4 replies · 13 reposts · 64 likes
@kpal_koyena
Koyena Pal
2 months
🚨 Registration is live! 🚨 The New England Mechanistic Interpretability (NEMI) Workshop is happening August 22nd 2025 at Northeastern University! A chance for the mech interp community to nerd out on how models really work 🧠🤖 🌐 Info: https://t.co/mXjaMM12iv 📝 Register:
3 replies · 28 reposts · 107 likes
@neuralink
Neuralink
2 months
Watch the latest update from the Neuralink team.
1K replies · 3K reposts · 13K likes
@MartinKlissarov
Martin Klissarov
2 months
As AI agents face increasingly long and complex tasks, decomposing them into subtasks becomes increasingly appealing. But how do we discover such temporal structure? Hierarchical RL provides a natural formalism, yet many questions remain open. Here's our overview of the field 🧵
12 replies · 64 reposts · 281 likes
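The "natural formalism" referenced here is usually the options framework: an option bundles a sub-policy with a termination condition, and a high-level policy chooses among options rather than primitive actions. A minimal hedged sketch (all names and the env interface are illustrative):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Option:
    """One temporally extended action in the options framework."""
    policy: Callable      # state -> primitive action (the sub-policy)
    terminates: Callable  # state -> bool (when the option ends)

def run_option(env, state, option, max_steps=100):
    """Execute a single option until its termination condition fires.

    A high-level policy would pick among options at this granularity,
    so credit assignment happens over subtasks, not primitive steps.
    (Hypothetical env interface, for illustration only.)
    """
    total_reward = 0.0
    for _ in range(max_steps):
        action = option.policy(state)
        state, reward, done = env.step(action)
        total_reward += reward
        if done or option.terminates(state):
            break
    return state, total_reward
```

The open questions the thread gestures at are largely about where the options come from, i.e. discovering the sub-policies and termination conditions rather than hand-specifying them.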
@sizhe_lester_li
Lester Li
2 months
Now in Nature! 🚀 Our method learns a controllable 3D model of any robot from vision, enabling single-camera closed-loop control at test time! This includes previously uncontrollable, soft, and bio-inspired robots, potentially lowering the barrier to entry for automation! Paper:
5 replies · 71 reposts · 428 likes
@EkdeepL
Ekdeep Singh
2 months
This collab was one of the most beautiful papers I've ever worked on! The amount I learned from @danielwurgaft was insane and you should follow him to inherit some gems too :D
@danielwurgaft
Daniel Wurgaft
2 months
🚨New paper! We know models learn distinct in-context learning strategies, but *why*? Why generalize instead of memorize to lower loss? And why is generalization transient? Our work explains this & *predicts Transformer behavior throughout training* without its weights! 🧵 1/
1 reply · 2 reposts · 22 likes
@EkdeepL
Ekdeep Singh
2 months
🚨New paper! We know models learn distinct in-context learning strategies, but *why*? Why generalize instead of memorize to lower loss? And why is generalization transient? Our work explains this & *predicts Transformer behavior throughout training* without its weights! 🧵 1/
9 replies · 64 reposts · 348 likes
@chingfang17
Ching Fang (chingfang.bsky.social)
3 months
Humans and animals can rapidly learn in new environments. What computations support this? We study the mechanisms of in-context reinforcement learning in transformers, and propose how episodic memory can support rapid learning. Work w/ @KanakaRajanPhD:
arxiv.org: Humans and animals show remarkable learning efficiency, adapting to new environments with minimal experience. This capability is not well captured by standard reinforcement learning algorithms...
7 replies · 59 reposts · 251 likes
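To make "in-context reinforcement learning" concrete (a sketch of the common setup, not necessarily this paper's exact protocol): the transformer's weights stay frozen, and adaptation happens by attending to a context of transitions from the new environment. All names below are illustrative:

```python
import numpy as np

def build_context(transitions, query_state):
    """Serialize recent experience into one input sequence.

    transitions: non-empty list of (state, action, reward) tuples, with
    state/action as 1-D arrays, from the current (novel) environment.
    The model adapts by attending to them in-context -- no gradient
    updates. (A generic in-context RL setup; details vary by paper.)
    """
    tokens = [np.concatenate([s, a, [r]]) for s, a, r in transitions]
    # Append the query state with a blank action/reward slot to fill in.
    blank_action = np.zeros_like(transitions[0][1])
    tokens.append(np.concatenate([query_state, blank_action, [0.0]]))
    return np.stack(tokens)   # shape: (len(transitions) + 1, token_dim)
```

The episodic-memory proposal in the tweet would then amount to retrieving which past transitions populate this context, rather than learning a new policy from scratch.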