
Javier Abad Martinez (@JavierAbadM)
Followers: 91 · Following: 50 · Media: 3 · Statuses: 20
PhD Student @ETH_AI_Center | Interested in AI Safety, Privacy & Causal Inference
Zurich
Joined September 2022
RT @FannyYangETH: Register now (first-come first-served) for the "Math of Trustworthy ML workshop" at #LagoMaggiore, Switzerland, Oct 12-16…
RT @yaxi_hu: What if learning and unlearning happen simultaneously, with unlearning requests between updates? Check out our work on onlin…
RT @AmartyaSanyal: Advertising an Open Postdoc position in learning theory/privacy/robustness/unlearning or any related topics with me a…
RT @javirandor: Presenting 2 posters today at ICLR. Come check them out! 10am ➡️ #502: Scalable Extraction of Training Data from Aligned,…
RT @pdebartols: Landed in Singapore for #ICLR—excited to see old & new friends! I’ll be presenting: 📌 RAMEN @ Main Conference on Saturday…
Presenting our work at #ICLR this week! Come by the poster or oral session to chat about copyright protection and AI/LLM safety.
📌 𝐏𝐨𝐬𝐭𝐞𝐫: Friday, 10 a.m. – 12.30 p.m. | Booth 537
📌 𝐎𝐫𝐚𝐥: Friday, 3.30 – 5 p.m. | Room Peridot
@FraPintoML @DonhauserKonst @FannyYangETH
LLMs accidentally spitting out copyrighted content? We’ve got a fix. Our paper on CP-Fuse—a method to prevent LLMs from regurgitating protected data—got accepted as an Oral at #ICLR2025! 👇Check it out! 📄 🤖
RT @AmartyaSanyal: Very shortly at @RealAAAI, @alexandrutifrea and I will be giving a Tutorial on the impact of Quality and availability o…
LLMs accidentally spitting out copyrighted content? We’ve got a fix. Our paper on CP-Fuse—a method to prevent LLMs from regurgitating protected data—got accepted as an Oral at #ICLR2025! 👇Check it out! 📄 🤖
(1/5) LLMs risk memorizing and regurgitating training data, raising copyright concerns. Our new work introduces CP-Fuse, a strategy to fuse LLMs trained on disjoint sets of protected material. The goal? Preventing unintended regurgitation 🧵 Paper:
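The thread only names the idea, so here is a rough, hypothetical illustration of what fusing two LLMs at generation time can look like: greedy decoding from a fixed 50/50 combination of two models' next-token log-probabilities. This is a sketch under stated assumptions, not the CP-Fuse algorithm from the paper; the gpt2 checkpoints merely stand in for models trained on disjoint protected sets, and the fixed weight is an assumption made purely for the demo.

```python
# Illustrative sketch only: fuse next-token distributions from two LLMs so that
# neither model's training data dominates generation. NOT the paper's CP-Fuse
# algorithm; model choices and the fixed 50/50 weight are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")              # shared tokenizer (assumed)
model_a = AutoModelForCausalLM.from_pretrained("gpt2")   # stand-in: trained on protected set A
model_b = AutoModelForCausalLM.from_pretrained("gpt2")   # stand-in: trained on protected set B

def fused_next_token(input_ids, w=0.5):
    """Greedily pick the next token from a weighted mix of the two models' log-probs."""
    with torch.no_grad():
        logp_a = torch.log_softmax(model_a(input_ids).logits[:, -1], dim=-1)
        logp_b = torch.log_softmax(model_b(input_ids).logits[:, -1], dim=-1)
    # Weighted sum of log-probabilities (log of an unnormalized geometric mean);
    # normalization is irrelevant for the argmax below.
    fused = w * logp_a + (1 - w) * logp_b
    return fused.argmax(dim=-1, keepdim=True)

ids = tok("Once upon a time", return_tensors="pt").input_ids
for _ in range(20):
    ids = torch.cat([ids, fused_next_token(ids)], dim=-1)
print(tok.decode(ids[0]))
```

A real system would pick the fusion weight adaptively rather than hard-coding 0.5; the hard-coded value here just keeps the example self-contained.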
RT @FannyYangETH: Eager to hear feedback from anyone who applies causal inference about this recent work with this amazing group of people…
RT @pdebartols: Looking for a more efficient way to estimate treatment effects in your randomized experiment? We introduce H-AIPW: a novel…
RT @dmitrievdaniil7: Excited to present at #NeurIPS2024 our work on robust mixture learning! How hard is mixture learning when (a lot of)…
(6/5) A big shoutout to the team: @DonhauserKonst, @FraPintoML and @FannyYangETH—thanks for the fantastic collaboration! Paper: Code:
RT @ETH_AI_Center: Thrilled to share our 8 conference paper contributions to @icmlconf 2024 next week. Congrats to our doctoral fellows, po…
RT @pdebartols: Come to our AISTATS poster (#96) this afternoon (5-7pm) to learn more about hidden confounding!
RT @pdebartols: Worried that hidden confounding stands in the way of your analysis? We propose a new strategy when a small RCT is available…