Max David Gupta

@MaxDavidGupta1

Followers: 72 · Following: 174 · Media: 2 · Statuses: 84

CS @Princeton Math @Columbia

Joined March 2020
@MaxDavidGupta1
Max David Gupta
13 days
I started writing on Substack! The first piece is on how breaking the IID assumption while training neural networks leads to different learned representational structures. I'll try to post weekly with short-form updates on experiences and experiments I run at @cocosci_lab
0
0
1
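For readers unfamiliar with the idea, here is a toy sketch (my illustration, not the Substack piece's actual setup) of what breaking the IID assumption can look like: train the same small network on shuffled versus class-blocked data and compare the hidden representations it ends up with. All parameters and names below are illustrative.

```python
# Toy sketch: same data, two sample orderings (IID-shuffled vs class-blocked),
# then compare the learned hidden representations via their dissimilarity
# matrices. Everything here is illustrative, not the post's actual experiment.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n_per_class=200, n_classes=4, dim=20):
    # One Gaussian cluster per class.
    X, y = [], []
    for c in range(n_classes):
        mu = rng.normal(size=dim)
        X.append(mu + 0.5 * rng.normal(size=(n_per_class, dim)))
        y.append(np.full(n_per_class, c))
    return np.vstack(X), np.concatenate(y)

def train(X, y, order, hidden=32, lr=0.05, epochs=5):
    # One-hidden-layer MLP trained with plain SGD in the given sample order.
    W1 = rng.normal(0, 0.1, (X.shape[1], hidden))
    W2 = rng.normal(0, 0.1, (hidden, y.max() + 1))
    for _ in range(epochs):
        for i in order:
            h = np.maximum(0, X[i] @ W1)           # ReLU hidden activations
            z = h @ W2; z -= z.max()
            p = np.exp(z); p /= p.sum()            # softmax probabilities
            p[y[i]] -= 1.0                         # dLoss/dLogits
            g1 = np.outer(X[i], (p @ W2.T) * (h > 0))
            W2 -= lr * np.outer(h, p)
            W1 -= lr * g1
    return np.maximum(0, X @ W1)                   # final hidden representations

X, y = make_data()
H_iid = train(X, y, rng.permutation(len(X)))       # shuffled: roughly IID
H_blk = train(X, y, np.argsort(y, kind="stable"))  # class-blocked: non-IID

def rdm(H):
    return 1 - np.corrcoef(H)   # representational dissimilarity matrix

iu = np.triu_indices(len(X), k=1)
print("RDM agreement (IID vs blocked):",
      np.corrcoef(rdm(H_iid)[iu], rdm(H_blk)[iu])[0, 1])
```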
@sreejan_kumar
Sreejan Kumar
1 month
I'm excited to share that my new postdoctoral position is going so well that I submitted a new paper at the end of my first week! A thread below
@biorxiv_neursci
bioRxiv Neuroscience
1 month
Sensory Compression as a Unifying Principle for Action Chunking and Time Coding in the Brain https://t.co/QTNBYaYmwo #biorxiv_neursci
1
11
69
@MaxDavidGupta1
Max David Gupta
3 months
P1-G-70 at 1PM Salon 8!
0
0
0
@MaxDavidGupta1
Max David Gupta
3 months
I’ll be sharing this work at #CogSci2025! Send me a message if you’d like to meet.
@MaxDavidGupta1
Max David Gupta
7 months
Happy to share my first first-authored work at @cocosci_lab. Determining sameness or difference between objects is utterly trivial to humans, but surprisingly inaccessible to AI. Meta-learning can help neural networks overcome this barrier. Link: https://t.co/ID8DfXOImj (1/5)
1
0
0
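To make the setup concrete, here is a hypothetical sketch of a same/different episode generator in the meta-learning spirit the thread describes: every episode draws fresh objects, so a learner has to pick up the abstract relation rather than memorize items. This structure is my guess at a minimal version, not the paper's code.

```python
# Hypothetical same/different episode generator: each meta-learning episode
# uses novel "objects" (random binary patterns), so the identity of items
# cannot be memorized and only the abstract relation transfers.
import numpy as np

rng = np.random.default_rng(1)

def sample_episode(n_objects=8, dim=16, n_pairs=32):
    # Fresh objects every episode.
    objects = rng.integers(0, 2, size=(n_objects, dim)).astype(float)
    X, y = [], []
    for _ in range(n_pairs):
        if rng.random() < 0.5:                        # "same" pair
            i = rng.integers(n_objects)
            X.append(np.concatenate([objects[i], objects[i]]))
            y.append(1)
        else:                                         # "different" pair
            i, j = rng.choice(n_objects, size=2, replace=False)
            X.append(np.concatenate([objects[i], objects[j]]))
            y.append(0)
    return np.array(X), np.array(y)

X, y = sample_episode()
print(X.shape, y.mean())   # (32, 32), labels roughly balanced
```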
@MaxDavidGupta1
Max David Gupta
3 months
Mech interp is great for people who were good at calc and interested in the brain, but too squeamish to become neurosurgeons? Sign me up.
0
0
1
@MaxDavidGupta1
Max David Gupta
3 months
Jung: "Never do human beings speculate more, or have more opinions, than about things which they do not understand." This rings true for me today - I'm grateful to be part of institutions that prefer the scientific method to wanton speculation.
1
0
0
@MaxDavidGupta1
Max David Gupta
3 months
Love this take on RL in day-to-day life (mimesis is such a silent killer):
@_jasonwei
Jason Wei
3 months
Becoming an RL diehard in the past year and thinking about RL for most of my waking hours inadvertently taught me an important lesson about how to live my own life. One of the big concepts in RL is that you always want to be “on-policy”: instead of mimicking other people’s
0
0
0
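For the RL term being quoted: "on-policy" means the learner updates from actions its own current policy sampled, rather than from someone else's trajectories. A minimal sketch of an on-policy REINFORCE update on a toy bandit (mine, not anything from the thread):

```python
# Minimal on-policy illustration: REINFORCE estimates the gradient from
# actions sampled by the *current* policy, unlike imitation learning, which
# fits another agent's action distribution.
import numpy as np

rng = np.random.default_rng(2)
true_reward = np.array([0.2, 0.8])   # 2-armed bandit, arm 1 is better
theta = np.zeros(2)                  # policy logits

for step in range(2000):
    p = np.exp(theta - theta.max()); p /= p.sum()   # softmax policy
    a = rng.choice(2, p=p)                          # sample from CURRENT policy
    r = float(rng.random() < true_reward[a])        # Bernoulli reward
    grad = -p
    grad[a] += 1.0                                  # grad of log pi(a)
    theta += 0.1 * r * grad                         # REINFORCE step

print("learned policy:", np.round(np.exp(theta) / np.exp(theta).sum(), 2))
```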
@MaxDavidGupta1
Max David Gupta
3 months
ICML is everyone's chance to revisit the days we peaked in HS multi-variable calc
0
0
1
@emollick
Ethan Mollick
3 months
I am starting to think sycophancy is going to be a bigger problem than pure hallucination as LLMs improve. Models that won’t tell you directly when you are wrong (and justify your correctness) are ultimately more dangerous to decision-making than models that are sometimes wrong.
211
433
3K
@MaxDavidGupta1
Max David Gupta
3 months
Other areas I’ve been thinking about a lot recently: differential geometry and its connections to manifold learning in neural networks, and meta-reinforcement learning.
0
0
1
@MaxDavidGupta1
Max David Gupta
3 months
I'll be attending ICML next week. Interested in chatting about meta-learning, concept acquisition, or relational reasoning in humans and machines? Send me a DM or drop by my poster at the high-dimensional learning dynamics workshop (HiLD) on July 18! https://t.co/kvKMCCSDdP
Link card: sites.google.com (18 July, ICML 2025, Vancouver, BC, Canada)
1
0
3
@nataliyakosmyna
Nataliya Kosmyna, Ph.D
4 months
No, your brain does not perform better after LLM or during LLM use. Check our paper: "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task": https://t.co/28T4XnBlnj
14
60
126
@RTomMcCoy
Tom McCoy
5 months
🤖🧠Paper out in Nature Communications! 🧠🤖 Bayesian models can learn rapidly. Neural networks can handle messy, naturalistic data. How can we combine these strengths? Our answer: Use meta-learning to distill Bayesian priors into a neural network! https://t.co/vmOkilhMxJ 1/n
13
87
443
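One way to picture the recipe (a hedged sketch using Reptile-style meta-learning on toy sine tasks; the paper's own architecture and training procedure differ): sample tasks from a prior, adapt to each in an inner loop, and nudge a shared initialization toward the adapted weights, so the prior's structure ends up built into where learning starts.

```python
# Hedged sketch of distilling a task prior via meta-learning (Reptile-style;
# not the paper's actual setup). Tasks come from a "prior" over sine curves.
import numpy as np

rng = np.random.default_rng(3)
FREQS = rng.uniform(0.5, 3.0, 32)       # fixed random sine features
PHASES = rng.uniform(0, 2 * np.pi, 32)

def features(x):
    return np.sin(np.outer(x, FREQS) + PHASES)

def sample_task():
    # Prior over tasks: sine functions with random amplitude and phase.
    amp, ph = rng.uniform(0.5, 2.0), rng.uniform(0, np.pi)
    return lambda x: amp * np.sin(x + ph)

def adapt(w, task, steps=20, lr=0.05):
    # Inner loop: a few SGD steps on fresh data from one task.
    for _ in range(steps):
        x = rng.uniform(-np.pi, np.pi, 16)
        phi = features(x)
        w = w - lr * phi.T @ (phi @ w - task(x)) / len(x)
    return w

meta_w = np.zeros(32)
for _ in range(500):                     # outer loop: Reptile update
    adapted = adapt(meta_w.copy(), sample_task())
    meta_w += 0.1 * (adapted - meta_w)

# Compare few-shot error on a new task from scratch vs from the meta init;
# informally, the meta init carries the prior's shared structure.
task = sample_task()
x_test = np.linspace(-np.pi, np.pi, 100)
for name, w0 in [("scratch", np.zeros(32)), ("meta-init", meta_w)]:
    w = adapt(w0, task, steps=5)
    print(name, "test MSE:", np.mean((features(x_test) @ w - task(x_test))**2))
```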
@MaxDavidGupta1
Max David Gupta
6 months
Can ideas from hard negative mining in contrastive learning play into generating valid counterfactual reasoning paths? Or am I way off base? Curious to hear what people think.
1
0
2
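For context on the first half of the question, a minimal sketch of hard negative mining itself (my toy numpy version, not any particular library's API): keep the candidate negatives most similar to the anchor, since those dominate the contrastive gradient.

```python
# Toy hard negative mining: rank candidate negatives by cosine similarity to
# the anchor and keep the k most similar ("hardest") ones.
import numpy as np

rng = np.random.default_rng(4)

def hardest_negatives(anchor, candidates, k=5):
    a = anchor / np.linalg.norm(anchor)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sims = c @ a                          # cosine similarity per candidate
    return np.argsort(sims)[-k:]          # indices of the k hardest negatives

anchor = rng.normal(size=64)
negatives = rng.normal(size=(100, 64))
print("hardest negative indices:", hardest_negatives(anchor, negatives))
```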
@MaxDavidGupta1
Max David Gupta
6 months
The developmental perspective is also incredibly important: we grow into our worlds and actively fashion ourselves within them. This requires a feeling of ownership and organic adaptivity that many of those interviewed here say empowers them, and which is often taken away by drugs.
0
0
0
@MaxDavidGupta1
Max David Gupta
6 months
What I like about the article is its shift from a categorical to a situational model of ADHD: we all face environmental factors that make it hard to focus, and bringing awareness to those factors is often a better cure than blindly assuming they apply universally.
1
0
0
@MaxDavidGupta1
Max David Gupta
6 months
The ultimate question: is your ADHD controlling you, or is big pharma controlling your attention? I'd like to see longer-term studies on Ritalin reliance to really test my assumptions here, but I worry the picture won't be rosy.
0
0
0
@MaxDavidGupta1
Max David Gupta
6 months
I've also argued that the stigmatization of ADHD leads people to feel locked into cyclic patterns: they understand the issue but feel that control over it is out of their hands, effectively outsourcing their ownership to big pharma even more quickly.
0
0
0