Pratyusha Sharma ✈️ NeurIPS Profile

@pratyusha_PS

4K Followers · 495 Following · 9 Media · 210 Statuses

Science ⇌ Deep Learning. Incoming Asst. Professor at NYU (@NYU_Courant & @NYUDataScience). Sr Research Scientist at @Microsoft. PhD @MIT_CSAIL.

Cambridge, MA
Joined September 2013
@NYUDataScience
NYU Center for Data Science
2 days
Welcoming new faculty to CDS! This fall, we welcomed Greg Durrett (@gregd_nlp). In Fall 2026, we'll welcome Jaume Vives-i-Bastida (@jaumevivesb), Zongyi Li (@zongyili_nyu), Juan Carlos Perdomo Silva, and Pratyusha Sharma (@pratyusha_PS).
0 replies · 7 reposts · 50 likes
@pratyusha_PS
Pratyusha Sharma ✈️ NeurIPS
21 days
I will be at Microsoft Research NYC this year. If you’re looking for spring/summer internships or want to chat about research, hit me up!
5 replies · 3 reposts · 62 likes
@pratyusha_PS
Pratyusha Sharma ✈️ NeurIPS
21 days
HUGE thanks to my PhD advisors @jacobandreas and Antonio Torralba for being the most wonderful advisors one can possibly have and for taking this journey together! Also big thanks to countless other people, colleagues, friends, and family who’ve supported and guided me along the …
1 reply · 0 reposts · 33 likes
@pratyusha_PS
Pratyusha Sharma ✈️ NeurIPS
21 days
📢 Some big (& slightly belated) life updates! 1. I defended my PhD at MIT this summer! 🎓 2. I'm joining NYU as an Assistant Professor starting Fall 2026, with a joint appointment in Courant CS and the Center for Data Science. 🎉 🔬 My lab will focus on empirically studying …
102 replies · 91 reposts · 2K likes
@pratyusha_PS
Pratyusha Sharma ✈️ NeurIPS
2 months
Super excited about @ReeceShuttle’s new paper! 1️⃣ LoRA forgets less—and even better, forgetting from LoRA is reversible with a dead-simple intervention! ✨ 2️⃣ You might think “if LoRA forgets less than full fine-tuning, it’s better for continual learning,” right? Nope!🚫
@ReeceShuttle
Reece Shuttleworth
2 months
🧵 LoRA vs full fine-tuning: same performance ≠ same solution. Our NeurIPS ‘25 paper 🎉 shows that LoRA and full fine-tuning, even when equally well fit, learn structurally different solutions, and that LoRA forgets less and can be made even better (less forgetting) by a simple …
1 reply · 20 reposts · 238 likes
@TEDAISF
TEDAI San Francisco
8 months
🐋 Can AI help us understand whales — and ourselves? 📷 New TED Talk recorded at @TEDAISF is live! @MIT researcher @pratyusha_PS explores how machine learning is decoding the language of sperm whales — opening new frontiers in AI, linguistics & nature. https://t.co/R7nEwf6Y9c
1 reply · 9 reposts · 27 likes
@nlp_mit
MIT NLP
9 months
Hello everyone! We are quite a bit late to the Twitter party, but welcome to the MIT NLP Group account! Follow along for the latest research from our labs as we dive deep into language, learning, and logic 🤖📚🧠
27 replies · 54 reposts · 550 likes
@ShikharMurty
Shikhar
1 year
Super excited to share NNetnav: a new method for generating complex demonstrations to train web agents—driven entirely via exploration! Here's how we’re building useful browser agents without expensive human supervision: 🧵👇 Code: https://t.co/8USWMFSrIF Preprint: …
4 replies · 40 reposts · 134 likes
@TEDAISF
TEDAI San Francisco
1 year
Presenting now at #TEDAI 2024 stage: Pratyusha Sharma @pratyusha_PS, Researcher at @MIT. How do we understand the communication system of another species and possibly communicate back? #TEDAI #TEDAI2024 #AI #TEDTALK
0 replies · 7 reposts · 26 likes
@gupta_abhinav_
Abhinav Gupta
1 year
Excited to announce @SkildAI. In a thrilling year with @pathak2206 and the Skild AI team, we have scaled and built a foundation model that is robust and shows emergent capabilities. Truly excited about what comes next! Special thanks to @RashiShrivast18 and our investors.
@pathak2206
Deepak Pathak
1 year
Thrilled to announce @SkildAI! Over the past year, @gupta_abhinav_ and I have been working with our top-tier team to build an AI foundation model grounded in the physical world. Today, we’re taking Skild AI out of stealth with $300M in Series A funding: https://t.co/1kXo7NrnVr
9 replies · 10 reposts · 143 likes
@xiaolonw
Xiaolong Wang
1 year
Cannot believe this finally happened! Over the last 1.5 years, we have been developing a new LLM architecture, with linear complexity and expressive hidden states, for long-context modeling. The following plots show that our model trained on Books scales better (from 125M to 1.3B) …
20 replies · 268 reposts · 2K likes
@MIT_CSAIL
MIT CSAIL
1 year
A picture is worth a thousand words, but can an LLM get the picture if it has never seen images before? 🧵 MIT CSAIL researchers quantify how much visual knowledge LLMs (trained purely on text) have. The visual aptitude of the language model is tested by its ability to write, …
10 replies · 70 reposts · 256 likes
@alexrives
Alex Rives
1 year
We have trained ESM3 and we're excited to introduce EvolutionaryScale. ESM3 is a generative language model for programming biology. In experiments, we found ESM3 can simulate 500M years of evolution to generate new fluorescent proteins. Read more: https://t.co/iAC3lkj0iV
139 replies · 808 reposts · 3K likes
@belindazli
Belinda Li
1 year
As the world changes, documents go out of date. How can we adapt RAG systems to a stream of changing world data? We introduce ERASE, a way of updating and propagating facts within knowledge bases, and CLARK, a dataset targeting these update problems https://t.co/xNyv9ePRw8 1/
4 replies · 32 reposts · 127 likes
@davidbau
David Bau
2 years
The National Deep Inference Fabric #NDIF, an @NSF-funded AI research infrastructure project, is awarding 2024 **Summer Engineering Fellowships** in Boston. These are summer visiting positions, for current or recent PhD or undergrads, including stipend, travel and housing costs.
1 reply · 28 reposts · 60 likes
@seungwookh
Seungwook Han
2 years
🚀 Stronger, simpler, and better! 🚀 Introducing Value Augmented Sampling (VAS) - our new algorithm for LLM alignment and personalization that outperforms existing methods!
4 replies · 34 reposts · 130 likes
@bjkingape
Barbara J King
2 years
Exciting: Scientists including @pratyusha_PS @sgero say sperm whales "use a much richer set of sounds than previously known, which they called a 'sperm whale phonetic alphabet.'" #scicomm by @CarlZimmer https://t.co/QOyGJifi5t #whales #cetaceans #animalcommunication #animals
nytimes.com
Sperm whales rattle off pulses of clicks while swimming together, raising the possibility that they’re communicating in a complex language.
1 reply · 2 reposts · 15 likes
@jacobandreas
Jacob Andreas
2 years
And for something completely different: our paper on the combinatorial structure of sperm whale vocalizations (led by @pratyusha_PS, in collab w @ProjectCETI) is out in Nature Comms today! https://t.co/eBlWOj5mb4 https://t.co/dP6lWbtAbM
2 replies · 15 reposts · 31 likes
@MIT_CSAIL
MIT CSAIL
2 years
If sperm whales could talk, what would they say? New research on their communication reveals a complex combinatorial system that challenges our understanding of animal vocalizations. By analyzing 8,719 whale click sounds called “codas”, using machine learning, MIT CSAIL & …
5 replies · 45 reposts · 117 likes