Ishika Agarwal @ NeurIPS '25 ⛱️
@wonderingishika
394 Followers · 3K Following · 22 Media · 118 Statuses
PhD Candidate @convai_uiuc | Data Efficient NLP
Joined April 2016
I'm excited to announce NN-CIFT got into @NeurIPSConf 2025 (featuring a fancy, new title)💃💃🌴Can't wait to discuss it with everyone!! Thank you @dilekhakkanitur and @convai_uiuc 🎉🎉
🚀Very excited about my new paper! NN-CIFT slashes data valuation costs by 99% using tiny neural nets (205k params, just 0.0027% of 8B LLMs) while maintaining top-tier performance!
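For readers curious what a ~205k-parameter scorer might look like, here is a minimal NumPy sketch of the idea as I understand it: a tiny one-hidden-layer MLP reads the embeddings of a (candidate, target) example pair and emits an influence score, so an expensive LLM never has to judge each pair. `TinyInfluenceNet`, its layer sizes, and the ranking setup are all illustrative assumptions, not the paper's actual architecture or code.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyInfluenceNet:
    """One-hidden-layer MLP that scores how useful a candidate training
    example is for a target example, given their embeddings.
    (Hypothetical sketch; sizes are illustrative.)"""

    def __init__(self, emb_dim=256, hidden=384):
        # input: concatenated (candidate, target) embeddings
        self.W1 = rng.normal(0.0, 0.02, (2 * emb_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.02, (hidden, 1))
        self.b2 = np.zeros(1)

    def n_params(self):
        return self.W1.size + self.b1.size + self.W2.size + self.b2.size

    def score(self, cand_emb, target_emb):
        # ReLU hidden layer, scalar output
        h = np.maximum(0.0, np.concatenate([cand_emb, target_emb]) @ self.W1 + self.b1)
        return float(h @ self.W2 + self.b2)

net = TinyInfluenceNet()  # ~197k params, the same ballpark as the 205k quoted

# rank 1000 random candidates for one target and keep the 10 highest-scoring
cands = rng.normal(size=(1000, 256))
target = rng.normal(size=256)
scores = [net.score(c, target) for c in cands]
top10 = np.argsort(scores)[-10:]
```

The point of the exercise: once such a net is trained, scoring a candidate is two small matrix multiplies, which is where the claimed cost reduction over LLM-based valuation would come from.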
I'll be in San Diego discussing NN-CIFT during Wednesday's poster session at 11🏖️🌊 Also, I'm very honored to share that I was selected as the Top Reviewer at @NeurIPSConf -- thank you so much!!
So You Want to Be an Academic? A couple of years into your PhD, but wondering: "Am I doing this right?" Most of the advice is aimed at graduating students. But there's far less for junior folks who are still finding their academic path. My candid takes:
anandbhattad.github.io
Blog for junior PhD students on work, visibility, community, and sanity—long before the faculty job market is on the horizon.
Neural Networks for Learnable and Scalable Influence Estimation of Instruction Fine-Tuning Data by @wonderingishika @dilekhakkanitur TL;DR Replacing language models with small neural networks to quantify the informativeness of data Read more here:
ConvAI had a great NeurIPS season with four papers accepted to the main conference🎉 Find all the authors in San Diego this December ☀️
Thrilled to announce our new survey that explores the exciting possibilities and troubling risks of computational persuasion in the era of LLMs 🤖💬 📄arXiv: https://t.co/DoEMAWk3S9 💻 GitHub: https://t.co/W8aoFHqGFt
Our EMNLP'24 paper was just the beginning for multi-turn LLM tutors using a Socratic questioning approach! Encouraging to see @openai take a TreeInstruct-style approach for ChatGPT's new Study Mode! Read more about it here:
As ChatGPT becomes a go-to tool for students, we’re committed to ensuring it fosters deeper understanding and learning. Introducing study mode in ChatGPT — a learning experience that helps you work through problems step-by-step instead of just getting an answer.
Check out @priyanka_karg's cool work at ACL 🇦🇹🎉 If you didn't think she's cool enough already, she also has 3 more, equally cool acceptances at ACL 🤯🤯
I'm unfortunately not at #ACL2025 and so missed @priyanka_karg's presentation on tree-structured debates for comparative reasoning. Go talk to her about automating scientific discovery! Paper: https://t.co/BPlA6hFQwR w/ @wonderingishika
@dmguiuc
https://t.co/5jgJ9AZ7Wf
Hi all! I’m currently at #ACL25 presenting my four papers; drop by or DM me on X/Whova if you want to chat about automating scientific discovery, helping models and humans think critically, and structured reasoning + creativity. #NLProc #ACL2025NLP
Behind every successful woman there’s a man. His name is Claude.
Would models know more about Indian food in Hindi and Turkey’s history in Turkish? Does the language of a question affect an LLM’s answer? ✨Yes!✨ @wonderingishika and I are excited to announce our newest preprint in which we explore “Language Specific Knowledge (LSK)”.
[7/7] We are grateful to our advisor @dilekhakkanitur, special shoutout to @saagnikkk and @priyanka_karg, and other @convai_uiuc lab members 🎉 📄Paper: https://t.co/Vo2qzm1JEZ 💻Code:
github.com
Language Specific Knowledge Extractor (agarwalishika/LSKExtractor).
[6/7] We also show which languages tend to map to the most topics: Chinese, German, French and Arabic seem to be common. Interestingly, gemma-3-1b has the most knowledge in Vietnamese! Would love to hear the community’s thoughts on this :)
[5/7] We test this out on a bunch of models and various datasets, and find an average 10% increase in model performance.
[4/7] Next, during testing, we use the LSK map to select which language to translate questions into. We find the semantically closest query to the test question in our map and translate the test question to that language for the final output.
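The lookup step above can be sketched in a few lines: embed the test question, find the semantically closest stored query in the LSK map by cosine similarity, and return that query's language. The function name, the toy 2-D "embeddings", and the language tags are all hypothetical; the real LSKExtractor pipeline works on actual sentence embeddings and then translates the question into the selected language.

```python
import numpy as np

def select_language(test_emb, map_embs, map_langs):
    """Pick the language tagged on the stored query whose embedding is
    closest (cosine similarity) to the test question's embedding.
    (Hypothetical sketch of the LSK-map lookup.)"""
    q = test_emb / np.linalg.norm(test_emb)
    M = map_embs / np.linalg.norm(map_embs, axis=1, keepdims=True)
    sims = M @ q                       # cosine similarity to every stored query
    return map_langs[int(np.argmax(sims))]

# toy LSK map: three stored query embeddings, each tagged with its best language
map_embs = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
map_langs = ["hi", "tr", "zh"]

# a test question whose embedding sits nearest the first stored query
lang = select_language(np.array([0.9, 0.1]), map_embs, map_langs)
```

The design choice worth noting: because selection is a nearest-neighbor lookup over a precomputed map, it adds only an embedding call and a dot product at inference time.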
[3/7] Our method is called LSKExtractor. First, we build the LSK-to-language map: we notice that certain topics are best answered in certain languages.
[2/7] By carefully selecting the language of the question asked to the language model, we can improve model reasoning and accuracy by 10%. We also reveal some interesting relations between models and languages.
Would models know more about Indian food in Hindi and Turkey’s history in Turkish? Does the language of a question affect an LLM’s answer? ✨Yes!✨ @nbbozdag and I are excited to announce our newest preprint in which we explore “Language Specific Knowledge (LSK)”.
🚀Our ICML 2025 paper introduces "Premise-Augmented Reasoning Chains", a structured representation that makes the dependencies within a reasoning chain explicit. By revealing these dependencies, we significantly improve how LLM reasoning can be verified. 🧵[1/n]