
Jaydeep Borkar
@JaydeepBorkar
818 Followers · 7K Following · 44 Media · 1K Statuses
visiting researcher @AIatMeta & phd-ing @KhouryCollege; organizer @trustworthy_ml, prev @MITIBMLab. studying language models & safety.
Manhattan, NYC
Joined October 2017
What happens if we fine-tune an LLM on more PII? We find that PII that wasn't previously extracted gets extracted after fine-tuning on *other* PII. This could have implications for earlier seen data (e.g. during post-training or further fine-tuning). 🧵
RT @niloofar_mire: We bypassed Veo3's copyright guardrails for music-video generation w phonetically similar gibberish lyrics! "mom's spag…
RT @l2m2_workshop: L2M2 will be tomorrow at VIC, room 1.31-32! We hope you will join us for a day of invited talks, orals, and posters on L…
sites.google.com
Program
RT @lasha_nlp: Super thrilled that HALoGEN, our study of LLM hallucinations and their potential origins in training data, received an Outst…
RT @lasha_nlp: I'm in Vienna for #ACL2025NLP! Would love to meet and chat about training data, factuality, transparency, doing a PhD i…
RT @johntzwei: If you're interested in law/policy topics at #ACL2025NLP, on copyright or more, please reach out! Would be happy to chat wit…
Presenting this today at #ACL2025! Stop by if you're interested in chatting about memorization and privacy! :) Hall X5 Board #209 10:30-12 · Hall X4 Board #259 16-17:30
What happens if we fine-tune an LLM on more PII? We find that PII that wasn't previously extracted gets extracted after fine-tuning on *other* PII. This could have implications for earlier seen data (e.g. during post-training or further fine-tuning). 🧵
RT @l2m2_workshop: L2M2 is happening this Friday in Vienna at @aclmeeting #ACL2025NLP! We look forward to the gathering of memorization re…
sites.google.com
Program
RT @niloofar_mire: I'm gonna be recruiting students thru both @LTIatCMU (NLP) and @CMU_EPP (Engineering and Public Policy) for fall 2026!…
RT @JaydeepBorkar: What happens if we fine-tune an LLM on more PII? We find that PII that wasn't previously extracted gets extracted after…
Excited to be attending ACL in Vienna next week! I'll be co-presenting a poster with @niloofar_mire on our recent PII memorization work on July 29 16:00-17:30 Session 10 Hall 4/5 (& at @l2m2_workshop)! If you would like to chat memorization/privacy/safety, please reach out :)
What happens if we fine-tune an LLM on more PII? We find that PII that wasn't previously extracted gets extracted after fine-tuning on *other* PII. This could have implications for earlier seen data (e.g. during post-training or further fine-tuning). 🧵
RT @lucy3_li: I'm sadly not at #IC2S2, but I will be at #ACL2025 in Vienna next week!! Please spread the word that I'm recruiting pr…
lucy3.notion.site
I'm recruiting PhD students who will begin their degree in Fall 2026! I am an incoming assistant professor at Wisconsin-Madison's Computer Sciences department, and my research focuses on natural...
always very grateful for all my incredibly kind collaborators/advisors/research friends i get to talk and work with! especially if you're someone just entering the field, these kinds of interactions can really shape how you see things & how you feel about yourself!
There are many great researchers out there. But the ones that really stand out to me are the ones who are also kind, even when they don't need to be.
RT @kamalikac: For those in London (unfortunately I am not :)).
RT @savvyRL: We are raising $20k (which amounts to $800 per person), to cover their travel and lodging to Kigali, Rwanda in August, from ei…
donorbox.org
Watch our fundraiser video to meet us, hear our stories, and learn what your support makes possible. We're raising $20,000 to send 25 early-career AI researchers from Nigeria and Ghana to...
RT @johntzwei: Are you a researcher, trying to build a small GPU cluster? Did you already build one, and it sucks? I manage USC NLP's GPU…
RT @LoubnaBenAllal1: Introducing SmolLM3: a strong, smol reasoner! > SoTA 3B model > dual mode reasoning (think/no_think) > long context, …
now a part of @Meta superintelligence labs! exciting times!
Very excited to be joining @AIatMeta GenAI as a Visiting Researcher starting this June in New York City! I'll be continuing my work on studying memorization and safety in language models. If you're in NYC and would like to hang out, please message me :)
RT @niloofar_mire: We made a 1B Llama BEAT GPT-4o by making it MORE private?! LoCoMo results: GPT-4o: 80.6%; 1B Llama + GPT-4o (priv…
RT @BlancheMinerva: Two years in the making, we finally have 8 TB of openly licensed data with document-level metadata for authorship attri…