Atharva Kulkarni Profile
Atharva Kulkarni

@athrvkk

Followers: 176 · Following: 1K · Media: 9 · Statuses: 219

CS PhD @CSatUSC | Prev - @SimonsInstitute, @LTIatCMU, @Apple, @lcs2lab | #NLProc Research

Los Angeles, CA
Joined May 2021
@athrvkk
Atharva Kulkarni
3 days
RT @simi_97k: So excited to be one of the five winners of the Imminent Translated Research Grants! This is for work done with @OpenNLPLabs….
@athrvkk
Atharva Kulkarni
4 days
RT @BrihiJ: Our poster slot got moved, so I'll be talking more about this work and in general about personalizing natural language explanat….
@athrvkk
Atharva Kulkarni
5 days
RT @xinyue_cui411: Can we create effective watermarks for LLM training data that survive every stage in real-world LLM development lifecycl….
@athrvkk
Atharva Kulkarni
14 days
RT @bhavya_vasudeva: Swing by HiLD at #ICML2025 today to know more about our (ongoing) work on✨generalization of Shampoo/Muon vs. GD✨🔎. ✒️W….
@athrvkk
Atharva Kulkarni
16 days
RT @AdtRaghunathan: I will be at #ICML2025 🇨🇦 from Wednesday through Saturday. My students have a lot of exciting papers - check them out….
@athrvkk
Atharva Kulkarni
16 days
RT @IanMagnusson: Come chat with us at our ICML poster tomorrow! 📈 Learn about the best ways to evaluate for base language model developme….
@athrvkk
Atharva Kulkarni
17 days
RT @zhang_muru: I'm at #ICML2025, presenting Ladder-Residual at the first poster session tomorrow morning (7/15 1….
@athrvkk
Atharva Kulkarni
19 days
RT @michahu8: 📢 today's scaling laws often don't work for predicting downstream task performance. For some pretraining setups, smooth and p….
@athrvkk
Atharva Kulkarni
21 days
RT @johntzwei: Are you a researcher, trying to build a small GPU cluster? Did you already build one, and it sucks? I manage USC NLP’s GPU….
@athrvkk
Atharva Kulkarni
1 month
RT @_vaishnavh: Wrote my first blog post! I wanted to share a powerful yet under-recognized way to develop emotional maturity as a research….
@athrvkk
Atharva Kulkarni
1 month
RT @mattf1n: I didn't believe when I first saw, but: We trained a prompt stealing model that gets >3x SoTA accuracy. The secret is represen….
@athrvkk
Atharva Kulkarni
3 months
RT @LTIatCMU: Notice our new look? We're thrilled to unveil our new logo – representing our vision, values, and the future ahead. Stay tune….
@athrvkk
Atharva Kulkarni
3 months
RT @MOSS_workshop: Announcing the 1st Workshop on Methods and Opportunities at Small Scale (MOSS) at @icmlconf 2025! 🔗Website: https://t.c….
@athrvkk
Atharva Kulkarni
3 months
🙌🥳Had great fun doing this during my summer internship with folks from Apple (Yuan Zhang, Joel Ruben Antony Moniz, Xiou Ge, Bo-Hsiang Tseng, Dhivya Piraviperumal, Hong Yu) and USC (Swabha Swayamdipta). Looking forward to the feedback! 🙂 #LLMs #NLProc (7/n)
@athrvkk
Atharva Kulkarni
3 months
🚫Bottom line: There’s no single metric that captures hallucinations reliably across the board. 🎯Our work highlights the need for robust, context-aware, and generalizable hallucination detection tools as a prerequisite to meaningful mitigation. (6/n).
@athrvkk
Atharva Kulkarni
3 months
🧐Focusing on faithfulness and factuality errors in QA and dialogue tasks, we study diverse metrics spanning:
1. Syntactic and semantic similarity
2. Natural language inference
3. Multi-step question answering pipelines
4. Custom-trained models
5. SOTA LLMs as judge
(3/n)
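The thread above names several metric families without showing how they can disagree. A minimal, hypothetical stdlib sketch (the example sentences, function names, and both toy metrics are invented here, not taken from the paper) of why a single surface-similarity score is an unreliable hallucination signal:

```python
# Hypothetical toy sketch (not the paper's code): two crude similarity
# metrics applied to the same (source, generation) pairs, illustrating
# that surface similarity can rank a hallucinated output above a
# faithful paraphrase.
from difflib import SequenceMatcher


def jaccard_overlap(source: str, generated: str) -> float:
    """Token-level Jaccard similarity; deliberately naive (whitespace
    tokenization, punctuation left attached to words)."""
    a, b = set(source.lower().split()), set(generated.lower().split())
    return len(a & b) / len(a | b) if a | b else 1.0


def char_similarity(source: str, generated: str) -> float:
    """Character-level similarity via difflib's SequenceMatcher."""
    return SequenceMatcher(None, source.lower(), generated.lower()).ratio()


source = "The Eiffel Tower is in Paris and was completed in 1889."
faithful = "The Eiffel Tower, completed in 1889, stands in Paris."    # true, reworded
hallucinated = "The Eiffel Tower is in Paris and was completed in 1925."  # wrong date

# The hallucinated sentence is a near-verbatim copy, so both surface
# metrics score it higher than the faithful paraphrase, despite the
# factual error in the date.
for name, gen in [("faithful", faithful), ("hallucinated", hallucinated)]:
    print(name,
          round(jaccard_overlap(source, gen), 3),
          round(char_similarity(source, gen), 3))
```

The toy metrics reward verbatim copying rather than factual agreement, which is exactly the failure mode that motivates comparing lexical metrics against NLI-based checks and LLM judges.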
@athrvkk
Atharva Kulkarni
3 months
🤔Despite a surge in research on hallucination mitigation, few ask the critical questions:
1. Are the metrics capturing the hallucinations effectively?
2. Do they align with each other and the human notion of hallucination?
3. Do they generalize across different settings?
(2/n)