
Julia Kempe
@KempeLab
Followers: 2K · Following: 169 · Media: 47 · Statuses: 118
Silver Professor at NYU Courant and CDS; Research Scientist at FAIR. Research in Machine Learning; past in Quantum Computing & Finance. Posts my own.
Joined April 2024
RT @arnal_charles: ❓How to balance negative and positive rewards in off-policy RL❓ In Asymmetric REINFORCE for Off-Policy RL, we show that…
RT @NYUDataScience: Congrats to 37 CDS researchers — faculty, postdocs, and PhD students — who had papers accepted to ICLR 2025, including…
RT @feeelix_feng: Check out our poster tmr at 10am at the ICLR Bidirectional Human-AI Alignment workshop! We cover how on-policy preference…
RT @arvysogorets: If in Singapore next week, come by our #ICLR2025 Spotlight poster for our recent study at @KempeLab unveiling how data pr…
Thanks to wonderful coauthors @dohmatobelvis @feeelix_feng @arvysogorets @KartikAhuja1 @arjunsubgraph @f_charton @yangpuPKU @galvardi @AIatMeta @NYUDataScience, and to the ICLR PC @iclr_conf for unanimously upholding standards of rigor and ethical conduct!
RT @dohmatobelvis: We refused to cite the paper due to severe misconduct of the authors of that paper: plagiarism of our own prior work,…
It is a real delight to work with @dohmatobelvis, and I encourage every student in search of excellent and rigorous mentorship to apply to his group!
Papers accepted at @iclr_conf 2025:
- An Effective Theory of Bias Amplification
- Pitfalls of Memorization
- Strong Model Collapse
- Beyond Model Collapse
With @KempeLab.
RT @feeelix_feng: You think on-policy sampling gives the best reward models? Think again! 🔥 Our finding: Even with on-policy data, reward m…
PILAF (Policy-Interpolated Learning for Aligned Feedback): our response sampling scheme that provably aligns LLM preference learning with maximizing the underlying oracle reward! @feeelix_feng @ArielKwiatkowsk @KunhaoZ @YaqiDuanPKU @AIatMeta @NYUDataScience
RT @NYUDataScience: AI bias grows when data pruning favors simplicity over fairness. CDS PhD graduate Artem Vysogorets (@arvysogorets) and…
Model collapse & AI data: happy to work with BBC Radio 4 on this podcast. Impressed by the thoughtful questions and moderation of @aleksk & @Kevin_Fong. Features our work with @dohmatobelvis @feeelix_feng @yangpuPKU @f_charton at @NYUDataScience and @AIatMeta.
Submit to our New Frontiers in Associative Memories workshop @iclr_conf: new architectures & algorithms, memory-augmented LLMs, energy-based models, Hopfield networks, associative memory & diffusion. Organizing with @DimaKrotov et al.
Thanks to co-author & colleague @f_charton for this collaboration (Emergent Properties with Repeated Examples), which won the Debunking Challenge at SciForDL @scifordl #NeurIPS2024. We both learned that - at least sometimes - one epoch is *not* all you need. @AIatMeta @NYUDataScience
One epoch is not all you need! Our paper, Emergent Properties with Repeated Examples, with @KempeLab, won the NeurIPS24 Debunking Challenge, organized by the Science for Deep Learning workshop @scifordl.
RT @scifordl: Congratulations to the winners of our debunking challenge @f_charton and @KempeLab for "Emergent properties with repeated exa…
If you - like many - believe that using more (good) data is better than repeating on less, guess again! And come to our poster "Emergent Properties…" at the SciForDL @scifordl workshop, West Mtg Room 205-207, this afternoon (after 4pm)! With @f_charton @AIatMeta @NYUDataScience
Interested in the implicit bias of algorithms like Adam and Shampoo? Pass by our poster "Flavors of Margin" today (Sat) at the @M3LWorkshop, 4-5pm, East Meeting Room 1-3, with Nikolaos Tsilivis & @galvardi @NYUDataScience @AIatMeta @WeizmannScience, based on
For those into jailbreaking LLMs: our poster "Mission Impossible" today shows the fundamental limits of LLM alignment - and improved ways to go about it, nonetheless. With @karen_ullrich & Jingtong Su. #2302, 11am - 2pm, Poster Session 3 East. @NYUDataScience @AIatMeta #NeurIPS2024