Weinan Sun Profile
Weinan Sun

@sunw37

Followers: 790
Following: 2K
Media: 28
Statuses: 889

Neuroscience, Artificial Intelligence, and Beyond. Assistant professor, Neurobiology and Behavior @CornellNBB

Joined February 2016
@sunw37
Weinan Sun
10 months
1/12 How do animals build an internal map of the world? In our new paper, we tracked thousands of neurons in mouse CA1 over days and weeks as they learned a VR navigation task. @nspruston @HHMIJanelia, w/ co-first author @JohanWinn. Video summary: https://t.co/7iU0R4OVbf Paper:
6
54
216
@MLStreetTalk
Machine Learning Street Talk
12 days
Wouldn't it be great if neural networks knew a spiral... as a spiral? 🍥
@chrisoffner3d
Chris Offner
12 days
"Now, a neural network can approximate... or it can understand. Be the spiral, my friend."
6
4
116
@AdamMarblestone
Adam Marblestone
2 months
The US should do something equally big in brain circuit mapping https://t.co/qZerDAjbJf
1
6
15
@JohanWinn
Johan Winnubst
2 months
Incredibly proud to announce that today @E11BIO is releasing our first preprint together with accompanying open data and methods🚀 Here we show how our PRISM technology addresses the biggest bottlenecks in connectomics: tracing and sample fragility.
@Andrew_C_Payne
Andrew Payne
2 months
@E11BIO is excited to unveil PRISM technology for mapping brain wiring with simple light microscopes. Today, brain mapping in humans and other mammals is bottlenecked by accurate neuron tracing. PRISM uses molecular ID codes and AI to help neurons trace themselves. We discovered
1
1
11
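A toy Python sketch of the idea in the PRISM announcement above: per-neuron molecular ID codes turn tracing into a code-matching problem. The barcode length, noise model, and assign helper below are all hypothetical and far simpler than the real pipeline.

import numpy as np

rng = np.random.default_rng(1)

# Three neurons, each tagged with a distinct 8-channel "molecular barcode".
barcodes = np.array([
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 0, 1, 1],
], dtype=float)

def observe_fragment(neuron_id, noise=0.15):
    """A fragment's measured code: the true barcode plus imaging noise."""
    return barcodes[neuron_id] + rng.normal(0, noise, barcodes.shape[1])

def assign(fragment_code):
    """Link a fragment to the neuron whose barcode it most resembles."""
    dists = np.linalg.norm(barcodes - fragment_code, axis=1)
    return int(np.argmin(dists))

# Fragments of neuron 2 should all be linked back to neuron 2.
fragments = [observe_fragment(2) for _ in range(5)]
print([assign(f) for f in fragments])   # expect [2, 2, 2, 2, 2]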
@tkipf
Thomas Kipf
2 months
Veo is a more general reasoner than you might think. Check out this super cool paper on "Video models are zero-shot learners and reasoners" from my colleagues at @GoogleDeepMind.
8
57
337
@AndrewLampinen
Andrew Lampinen
3 months
Why does AI sometimes fail to generalize, and what might help? In a new paper, we highlight the latent learning gap — which unifies findings from language model weaknesses to agent navigation — and suggest that episodic memory complements parametric learning to bridge it. Thread:
19
102
550
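A minimal Python sketch of the complementary-systems idea in the tweet above: a slow parametric learner paired with a fast episodic store that recalls exact past experiences when the query is close to one. The class names, trust_radius heuristic, and linear model are illustrative assumptions, not the paper's architecture.

import numpy as np

class EpisodicMemory:
    def __init__(self):
        self.keys, self.values = [], []

    def write(self, key, value):
        self.keys.append(np.asarray(key, dtype=float))
        self.values.append(value)

    def read(self, query):
        # Nearest-neighbor recall over stored episodes.
        dists = [np.linalg.norm(query - k) for k in self.keys]
        return self.values[int(np.argmin(dists))], min(dists)

class HybridAgent:
    """Parametric linear model plus episodic recall (hypothetical design)."""
    def __init__(self, dim, lr=0.01, trust_radius=0.5):
        self.w = np.zeros(dim)
        self.lr = lr
        self.trust_radius = trust_radius
        self.memory = EpisodicMemory()

    def observe(self, x, y):
        x = np.asarray(x, dtype=float)
        self.w += self.lr * (y - self.w @ x) * x   # slow parametric learning
        self.memory.write(x, y)                    # fast one-shot storage

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        if self.memory.keys:
            recalled, dist = self.memory.read(x)
            if dist < self.trust_radius:           # close match: trust recall
                return recalled
        return self.w @ x                          # otherwise generalize

agent = HybridAgent(dim=2)
agent.observe([1.0, 0.0], 3.0)
print(agent.predict([1.0, 0.1]))   # near a stored episode: recalls 3.0
print(agent.predict([5.0, 5.0]))   # far from memory: parametric estimate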
@LakeBrenden
Brenden Lake @ NeurIPS
4 months
Our thoughts on symbols in mental representation, given today's advanced neural nets, w/ @cocosci_lab @RTomMcCoy @Brown_NLP @TaylorWWebb Major open Q: Can neural nets learn symbol-related abilities *without* training on massive, unrealistic data generated from symbolic systems?
@RTomMcCoy
Tom McCoy
4 months
🤖🧠 NEW PAPER ON COGSCI & AI 🧠🤖 Recent neural networks capture properties long thought to require symbols: compositionality, productivity, rapid learning So what role should symbols play in theories of the mind? For our answer...read on! Paper: https://t.co/VsCLpsiFuU 1/n
2
8
74
@jesseGlab
Jesse Goldberg
4 months
The world-renowned Cornell Lab of Ornithology is seeking to hire a new director.
1
7
12
@SaxeLab
Andrew Saxe
5 months
Excited to share new work @icmlconf by Loek van Rossem exploring the development of computational algorithms in recurrent neural networks. Hear it live tomorrow, Oral 1D, Tues 14 Jul, West Exhibition Hall C: https://t.co/zsnSlJ0rrc Paper: https://t.co/aZs7VZuFNg (1/11)
openreview.net
Even when massively overparameterized, deep neural networks show a remarkable ability to generalize. Research on this phenomenon has focused on generalization within distribution, via smooth...
2
21
69
@dileeplearning
Dileep George
5 months
"A transformer trained on 10M solar systems nails planetary orbits. But it botches gravitational laws".
@keyonV
Keyon Vafa @ NeurIPS
5 months
Can an AI model predict perfectly and still have a terrible world model? What would that even mean? Our new ICML paper formalizes these questions One result tells the story: A transformer trained on 10M solar systems nails planetary orbits. But it botches gravitational laws 🧵
9
26
191
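A toy analogy in Python for the prediction-vs-world-model gap described above (not the paper's transformer experiment): a flexible curve fit can predict an observed orbit almost perfectly while encoding nothing like the underlying law, which shows up the moment it must extrapolate.

import numpy as np
from numpy.polynomial import Polynomial

# "Ground truth": one coordinate of a circular orbit under gravity.
t_train = np.linspace(0, 2 * np.pi, 200)
x_train = np.cos(t_train)

# A flexible curve fit stands in for a large sequence model.
model = Polynomial.fit(t_train, x_train, deg=15)

# Near-perfect prediction inside the training range...
t_in = np.linspace(0, 2 * np.pi, 500)
print(f"in-range max error: {np.max(np.abs(model(t_in) - np.cos(t_in))):.1e}")

# ...but the implied "law" is wrong: a real orbit stays periodic,
# while the fitted polynomial diverges as soon as it extrapolates.
t_out = np.linspace(4 * np.pi, 5 * np.pi, 100)
print(f"out-of-range max error: {np.max(np.abs(model(t_out) - np.cos(t_out))):.1e}")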
@ellisk_kellis
Kevin Ellis
6 months
New paper: World models + Program synthesis by @topwasu 1. World modeling on-the-fly by synthesizing programs w/ 4000+ lines of code 2. Learns new environments from minutes of experience 3. Positive score on Montezuma's Revenge 4. Compositional generalization to new environments
16
105
572
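A toy Python rendering of world modeling by program synthesis (illustrative only; the paper's synthesized models run to 4000+ lines of code): enumerate candidate transition rules, keep the one consistent with observed experience, then use it to predict unseen transitions.

# Observed (state, action, next_state) transitions from the environment.
experience = [(0, 2, 2), (3, 1, 4), (5, 5, 10)]

# A tiny hypothesis space of candidate transition programs.
candidates = {
    "increment": lambda s, a: s + 1,
    "add_action": lambda s, a: s + a,
    "double_state": lambda s, a: 2 * s,
}

def synthesize(experience):
    """Return the first candidate consistent with every observed transition."""
    for name, program in candidates.items():
        if all(program(s, a) == s2 for s, a, s2 in experience):
            return name, program
    return None, None

name, world_model = synthesize(experience)
print(name)                # "add_action"
print(world_model(10, 3))  # predicted next state for an unseen case: 13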
@Aaditya6284
Aaditya Singh
9 months
Transformers employ different strategies through training to minimize loss, but how do these trade off, and why? Excited to share our newest work, where we show remarkably rich competitive and cooperative interactions (termed "coopetition") as a transformer learns. Read on 🔎⏬
1
23
134
@SaxeLab
Andrew Saxe
6 months
How does in-context learning emerge in attention models during gradient descent training? Sharing our new Spotlight paper @icmlconf: Training Dynamics of In-Context Learning in Linear Attention https://t.co/rr8T8ww5Kt Led by Yedi Zhang with @Aaditya6284 and Peter Latham
3
22
121
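For readers unfamiliar with the setting: linear attention drops the softmax, which is what makes the gradient-descent training dynamics analytically tractable. A minimal Python sketch of such a layer, with illustrative shapes and weight names:

import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 16                      # token dimension, context length

W_q = rng.normal(size=(d, d)) / np.sqrt(d)
W_k = rng.normal(size=(d, d)) / np.sqrt(d)
W_v = rng.normal(size=(d, d)) / np.sqrt(d)

def linear_attention(X):
    """X: (n, d) context tokens. Returns (n, d) outputs.

    Unlike softmax attention, the scores enter linearly, so the layer
    is amenable to exact analysis of its learning dynamics."""
    Q, K, V = X @ W_q.T, X @ W_k.T, X @ W_v.T
    scores = Q @ K.T / n          # (n, n) attention scores, no softmax
    return scores @ V

X = rng.normal(size=(n, d))
print(linear_attention(X).shape)  # (n, d)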
@srinituraga
Srini Turaga
8 months
This preprint is now published at @Nature. With current and former DeepMinders @yuvaltassa, Josh Merel, Matt Botvinick, and my @HHMIJanelia colleagues @vaxenburg, Igor Siwanowicz, @KristinMBranson, @MichaelBReiser, Gwyneth Card and more
@HHMIJanelia
HHMI | Janelia
8 months
🪰By infusing a virtual fruit fly with #AI, Janelia & @GoogleDeepMind scientists created a computerized insect that can walk & fly just like the real thing➡️ https://t.co/LqvOHKJr4F 🤖Read more about this work, first published in a #preprint in 2024➡️ https://t.co/QXsltdPxMb
6
25
96
@pfapostolides
Pierre Apostolides
8 months
(Plz repost) I’ve been receiving some good news lately and will be hiring at all levels to expand the lab. Please get in contact if you are interested in reinforcement learning, neural plasticity, circuit dynamics, and/or hearing rehabilitation. pierre.apostolides @ umich .edu
2
29
33
@ChenSun92
Chen Sun @NeurIPS2025
8 months
From our team at @GoogleDeepMind: we ask, as an LLM continues to learn, how do new facts pollute existing knowledge? (And can we control it?) We investigate such hallucinations in our new paper, to be presented as a Spotlight at @iclr_conf next week.
7
27
199
@ThomasMiconi
Thomas Miconi
8 months
New preprint! Intelligent creatures can solve truly novel problems (not just variations of previous problems), zero-shot! Why? They can "think" before acting, i.e. mentally simulate possible behaviors and evaluate likely outcomes How can we build agents with this ability?
3
12
49
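A generic model-based lookahead in Python, sketching the "think before acting" loop the tweet describes (not the preprint's method): roll candidate actions through an internal model and commit to the best predicted outcome. The dynamics, value function, and greedy planner are all stand-ins.

import numpy as np

def internal_model(state, action):
    # Hypothetical learned dynamics: here, a simple deterministic rule.
    return state + action

def value(state, goal):
    return -np.linalg.norm(goal - state)   # closer to the goal is better

def plan(state, goal, actions, horizon=3):
    """Greedy mental simulation: at each step, imagine every action and
    keep the one whose simulated outcome scores best."""
    chosen, sim = [], state.copy()
    for _ in range(horizon):
        best = max(actions, key=lambda a: value(internal_model(sim, a), goal))
        sim = internal_model(sim, best)    # advance the imagined state
        chosen.append(best)
    return chosen

actions = [np.array(a, dtype=float) for a in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
steps = plan(np.zeros(2), np.array([2.0, 1.0]), actions)
print([tuple(a) for a in steps])   # [(1.0, 0.0), (1.0, 0.0), (0.0, 1.0)]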
@sunw37
Weinan Sun
8 months
Seize the chance to work with the best!
@ChongxiLai
Chongxi Lai
8 months
Thrilled and grateful to be part of Astera's inaugural residency cohort! We're also hiring—check out the details inside!
0
0
6
@kwanalexc
Alex Kwan 關進晞
8 months
Our latest study identifies a specific cell type and receptor essential for psilocybin’s long-lasting neural and behavioral effects 🍄🔬🧠🐁 Led by Ling-Xiao Shao and @ItsClaraLiao Funded by @NIH @NIMHgov 📄Read in @Nature - https://t.co/teA2IhCpXi 1/12
nature.com
Nature - A pyramidal cell type and the 5-HT2A receptor in the medial frontal cortex have essential roles in psilocybin’s long-term drug action.
17
91
312