Irina Saparina

@irisaparina

203 Followers · 442 Following · 14 Media · 46 Statuses

PhD student at the University of Edinburgh

Edinburgh, Scotland
Joined March 2014
@irisaparina
Irina Saparina
1 month
🎤 @serj_troshin will present this work November 9th at 09:55-10:30 (poster lightning talk) and 11:00 - 12:15 (poster), Room A207 📄 Paper: https://t.co/4MRhKuVjxs 💻 Code: https://t.co/vbMbhOl5oN 3/3
github.com · serjtroshin/ask4diversity: Asking a Language Model for Diverse Responses
0
0
3
@irisaparina
Irina Saparina
1 month
We compare these strategies on: ✔️ Quality & efficiency ✔️ Lexical diversity ✔️ Computational flow diversity (a new metric we propose that's more suitable for reasoning tasks like math) Key finding: non-independent sampling boosts diversity without sacrificing quality! ✨ 2/3
1
0
2
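To make the lexical-diversity axis above concrete, here is a minimal Python sketch of distinct-n, a standard proxy for lexical diversity (the fraction of unique n-grams among all n-grams across a set of responses). This is a generic illustration, not necessarily the metric used in the paper, and the computational flow diversity metric the thread introduces is not reproduced here.

```python
from typing import List

def distinct_n(responses: List[str], n: int = 2) -> float:
    """Fraction of n-grams across all responses that are unique.

    A common lexical-diversity proxy; higher means less repetition.
    """
    ngrams = []
    for text in responses:
        tokens = text.split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

# Example: 6 bigrams total, 5 unique ("the cat" repeats) -> 0.833...
print(distinct_n(["the cat sat", "the cat ran", "a dog barked"]))
```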
@irisaparina
Irina Saparina
1 month
Excited to share our paper "Asking a Language Model for Diverse Responses" accepted at the #UncertaiNLP workshop at #EMNLP2025 When generating diverse responses, most approaches use parallel sampling. We study non-independent alternatives: enumeration & iterative sampling. w/ @serj_troshin 👇
1
1
11
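A minimal Python sketch of the three sampling strategies the thread contrasts: independent parallel sampling versus the two non-independent alternatives, enumeration and iterative sampling. The `llm` callable, prompts, and output parsing are placeholder assumptions for illustration, not the paper's implementation.

```python
from typing import Callable, List

def parallel_sampling(llm: Callable[[str], str], question: str, n: int) -> List[str]:
    """Independent samples: call the model n times with the same prompt."""
    return [llm(f"Answer the question.\n\nQ: {question}\nA:") for _ in range(n)]

def enumeration(llm: Callable[[str], str], question: str, n: int) -> List[str]:
    """Non-independent: ask for n distinct answers in a single generation."""
    out = llm(f"Give {n} diverse, distinct answers to the question, "
              f"one per line.\n\nQ: {question}\n")
    return [line.strip() for line in out.splitlines() if line.strip()][:n]

def iterative_sampling(llm: Callable[[str], str], question: str, n: int) -> List[str]:
    """Non-independent: condition each new answer on the previous ones."""
    answers: List[str] = []
    for _ in range(n):
        seen = "\n".join(f"- {a}" for a in answers) or "(none yet)"
        answers.append(llm(f"Q: {question}\nAnswers so far:\n{seen}\n"
                           f"Give one new answer different from all of the above:"))
    return answers
```

Any text-completion API can stand in for `llm`; the point of the contrast is that the two non-independent strategies let later answers depend on earlier ones, which is what drives the diversity gains reported in the thread.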
@agostina_cal
Agostina Calabrese 🦋
1 month
At #EMNLP2025 to present the last chapter of my PhD 🐼 Let's talk #HateSpeech detection, generalisation and NLP safety at my poster: 📆tomorrow 🕟4.30pm Look for the circus-themed poster 🎪🤸🏻‍♀️ Work with @tomsherborne @bjoernross and @mlapata at @EdinburghNLP + @cohere
0
6
31
@aadhikariii
Ashutosh Adhikari
1 month
Excited to share my first work as a PhD student at @EdinburghNLP that I will be presenting at EMNLP! RQ1: Can we achieve scalable oversight across modalities via debate? Yes! We show that debate between VLMs leads to better answer quality on reasoning tasks.
1
7
13
@EdinburghNLP
EdinburghNLP
4 months
Represent! ✌️
@PMinervini
Pasquale Minervini 🇪🇺 🇬🇧 🏴󠁧󠁢󠁳󠁣󠁴󠁿
5 months
The amazing folks at @EdinburghNLP will be presenting a few papers at ACL 2025 (@aclmeeting); if you're in Vienna, touch base with them! Here are the papers in the main track 🧵
0
9
58
@stanfordnlp
Stanford NLP Group
5 months
At the @aclmeeting Panel on Generalization of NLP Models, Mirella @mlapata argues that the real problem isn’t generalization but how to get models that learn and adapt in real time. That’s a pretty good requirement for true intelligence!
2
8
80
@irisaparina
Irina Saparina
5 months
Well deserved 👏
@aclmeeting
ACL 2026
5 months
📚Tom Sherborne: Modeling Cross-lingual Transfer for Semantic Parsing Sherborne's dissertation develops methods for cross-lingual transfer into low-resource languages, demonstrating their effectiveness in the context of semantic parsing for integration with database APIs.
0
0
2
@irisaparina
Irina Saparina
5 months
Interested in text-to-SQL or ambiguity? Curious how we can turn LLM overconfidence into an advantage? Let's talk! Come say hi 👋
0
0
1
@irisaparina
Irina Saparina
5 months
Excited to present our Findings paper at #ACL2025 in Vienna next week! 📄Disambiguate First, Parse Later: Generating Interpretations for Ambiguity Resolution in Semantic Parsing 🗓️ Tue, July 29, 10:30–12:00 📍 Hall 4/5, Session 7
@irisaparina
Irina Saparina
9 months
🔥 New Preprint! 🔥 How should LLMs handle ambiguous questions in text-to-SQL semantic parsing? 👉🏼 Disambiguate First, Parse Later! We propose a plug-and-play approach that explicitly disambiguates the question 💬 Paper: https://t.co/wtGBpGaElb
1
1
13
@AlexAag1234
Alex Gurung
8 months
Preprint: Can we learn to reason for story generation (~100k tokens), without reward models? Yes! We introduce an RLVR-inspired reward paradigm VR-CLI that correlates with human judgements of quality on the 'novel' task of Next-Chapter Prediction. Paper: https://t.co/eO0nUHzRjG
7
50
326
@irisaparina
Irina Saparina
9 months
(4/5) Experiments on AmbiQT & Ambrosia in both in-domain and out-of-domain settings show: 📌 SFT easily overfits 📌 Disambiguate first, parse later works better! 📌 Infilling improves performance 📌 Ambiguity remains a hard, low-resource problem!
1
0
2
@irisaparina
Irina Saparina
9 months
(3/5) Advantages: ✅ Interpretations act as transparent, explainable reasoning steps ✅ Plug-and-play approach ✅ Handles both ambiguous & unambiguous queries
1
0
1
@irisaparina
Irina Saparina
9 months
(2/5) LLMs struggle with ambiguity and default to a single preferred interpretation. We turn this bias into an advantage! Our approach: 1️⃣ Generate default NL interpretations using an LLM 2️⃣ Infill missing interpretations 3️⃣ Parse each into SQL (standard text-to-SQL)
1
0
1
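A minimal Python sketch of the three-step pipeline this tweet describes. The `llm` callable and all prompts are placeholder assumptions for illustration; the paper's actual system (for instance, how the infilling step is realized) may differ.

```python
from typing import Callable, List

def disambiguate_then_parse(llm: Callable[[str], str], question: str,
                            schema: str) -> List[str]:
    """Disambiguate first, parse later: ambiguous question -> list of SQL queries."""
    # 1) Generate the model's default natural-language interpretations.
    raw = llm(f"Schema: {schema}\nQuestion: {question}\n"
              "List every plausible interpretation of the question, one per line:")
    interpretations = [line.strip() for line in raw.splitlines() if line.strip()]

    # 2) Infill interpretations the first pass missed.
    extra = llm("Interpretations so far:\n" + "\n".join(interpretations) +
                "\nAdd one missing interpretation, or reply NONE:")
    if extra.strip().upper() != "NONE":
        interpretations.append(extra.strip())

    # 3) Parse each (now unambiguous) interpretation with standard text-to-SQL.
    return [llm(f"Schema: {schema}\nQuestion: {interp}\nSQL:")
            for interp in interpretations]
```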
@irisaparina
Irina Saparina
10 months
Congratulations to Dr. Parag Jain @jparag123 🥳🎓 Well deserved!
1
0
15
@rohit_saxena
Rohit Saxena
10 months
LLMs can tackle math olympiad probs but... can they read a clock 🤔? 🕰️📆 Our experiments reveal surprising failures in temporal reasoning—MLLMs struggle with analogue clock reading & date inference! Lost in Time: Clock and Calendar Understanding Challenges in Multimodal LLMs
4
23
81
@dmsobol
Daria Soboleva ✈️ NeurIPS
1 year
At #NeurIPS2024 with such amazing people!
2
1
8
@irisaparina
Irina Saparina
1 year
See you tomorrow (Thursday) from 11 am to 2 pm in West Ballroom A-D #5309 🚀
0
0
0