
Julia Kempe
@KempeLab
Followers: 2K · Following: 185 · Media: 50 · Statuses: 128
Silver Professor at NYU Courant and CDS, Research Scientist at FAIR. Research in Machine Learning; past in Quantum Computing & Finance. Posts my own.
Joined April 2024
RT @SimonsFdn: Our new Simons Collaboration on the Physics of Learning and Neural Computation will employ and develop powerful tools from #…
Check out the full paper here: Joint work with @is_labiad @mathuvu_ Matthieu Kowalski Marc Schoenauer @alessandroleite @teytaud! @AIatMeta @NYUDataScience (8/8)
arxiv.org
Gradient-based optimization is the workhorse of deep learning, offering efficient and scalable training via backpropagation. However, its reliance on large volumes of labeled data raises privacy...
Black-box Optimization for LLM Post-Training 💪
Strong non-vacuous generalization bounds ✔️
Privacy by design ✔️
Robustness to poisoning and data extraction ✔️
Improvement on reasoning benchmarks ✔️
@AIatMeta @NYUDataScience (1/8)
RT @karen_ullrich: How would you make an LLM "forget" the concept of dog — or any other arbitrary concept? 🐶❓ We introduce SAMD & SAMI — a…
RT @arnal_charles: ❓How to balance negative and positive rewards in off-policy RL❓ In Asymmetric REINFORCE for off-Policy RL, we show that…
RT @NYUDataScience: Congrats to 37 CDS researchers — faculty, postdocs, and PhD students — who had papers accepted to ICLR 2025, including…
nyudatascience.medium.com
Thirty-seven CDS researchers had papers accepted to ICLR 2025, with several receiving Spotlight recognition.
RT @feeelix_feng: Check out our poster tmr at 10am at the ICLR Bidirectional Human-AI Alignment workshop! We cover how on-policy preference…
Here's to the next generation of AI-literate kids!
International AI Olympiad. ML Researchers, you might appreciate the impressive syllabus. Do we have all the chops our kids are expected to have? :)
ioai-official.org
RT @arvysogorets: If in Singapore next week, come by our #ICLR2025 Spotlight poster for our recent study at @KempeLab unveiling how data pr…
Thanks to wonderful coauthors: @dohmatobelvis @feeelix_feng @arvysogorets @KartikAhuja1 @arjunsubgraph @f_charton @yangpuPKU @galvardi @AIatMeta @NYUDataScience, and the ICLR PC @iclr_conf for unanimously upholding standards of rigor and ethical conduct!
Our ICLR25 papers:
🎉 ICLR Spotlight: Strong Model Collapse
🎉 ICLR Spotlight: DRoP: Distributionally Robust Data Pruning
Beyond Model Collapse
Flavors of Margin
More details here soon!
arxiv.org
We study the implicit bias of the general family of steepest descent algorithms with infinitesimal learning rate in deep homogeneous neural networks. We show that: (a) an algorithm-dependent...
RT @dohmatobelvis: We refused to cite the paper due to severe misconduct of the authors of that paper: plagiarism of our own prior work, …
It is a real delight to work with @dohmatobelvis, and I encourage every student in search of excellent and rigorous mentorship to apply to his group!
Papers accepted at @iclr_conf 2025:
- An Effective Theory of Bias Amplification
- Pitfalls of Memorization
- Strong Model Collapse
- Beyond Model Collapse
With @KempeLab, …