
Neha Srikanth
@nehasrikanth
Followers
497
Following
3K
Media
6
Statuses
50
CS PhD student @umdcs @ClipUmd || natural language processing || prev @lyft, @UTCompSci
College Park, MD
Joined October 2020
When questions are poorly posed, how do humans vs. models handle them? Our #ACL2025 paper explores this + introduces a framework for detecting and analyzing poorly-posed information-seeking questions! Joint work with @boydgraber & @rachelrudinger! 🔗
6
14
61
@boydgraber @rachelrudinger Check out our paper for more details! 📍I'm unfortunately not at ACL this time, but our poster will be up at Poster Session 4 (Session 12) on Wednesday, July 30, 11:00-12:30, Hall 4/5!
0
0
2
@boydgraber @rachelrudinger We also discuss observations about the *process* of asking a question: asker replies are more likely when questions are poorly posed (and more likely to be positive), non-dominant interpretations get better feedback, and user-refined questions drift further from dominant interpretations.
0
0
1
@boydgraber @rachelrudinger Both humans and models produce high-entropy interpretation distributions on poorly-posed questions, but they converge on different interpretations when questions are clear!
0
0
1
@boydgraber @rachelrudinger We can measure "poorly-posedness" using entropy of interpretation distributions. High entropy = answerers can't agree on what the asker wants. Low entropy = clear dominant interpretation emerges.
0
0
0
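The entropy measure described above can be sketched in a few lines. This is a toy illustration, not code from the paper; the function name and interpretation labels are made up, and the only assumption is the standard Shannon-entropy definition over the empirical distribution of annotated interpretations:

```python
from collections import Counter
from math import log2

def interpretation_entropy(labels):
    """Shannon entropy (in bits) of the distribution of interpretations
    that answerers addressed for one question. Higher entropy means
    answerers agree less on what the asker wants."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Clear question: a dominant interpretation emerges (lower entropy).
clear = interpretation_entropy(["titles", "titles", "titles", "training"])

# Poorly posed question: interpretations spread out (maximal entropy here).
poorly_posed = interpretation_entropy(["titles", "training", "legal", "other"])
```

With four annotations split 3-1, the "clear" question scores about 0.81 bits, while the uniform four-way split scores the maximum 2 bits, matching the high-entropy vs. low-entropy contrast in the tweet.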
@boydgraber @rachelrudinger We collected 500 (question, answer, OP reply) interactions from r/NoStupidQuestions where askers gave feedback on whether their info need was met. Expert linguists annotated what interpretation each answerer actually addressed.
0
0
0
@boydgraber @rachelrudinger What makes a question poorly posed? When answerers can't identify a dominant interpretation despite reasoning about the asker's intent. Example: "Are chiropractors considered doctors?" → Could mean titles, training, legal powers, etc.
0
0
3
RT @rachelrudinger: I'm really excited about this new line of work with my collaborators at UMD and ARL on detecting common ground misalign…
0
6
0
RT @rupak_53: Linguistic theory tells us that common ground is essential to conversational success. But to what extent is it essential? Can…
0
7
0
RT @NishantBalepur: I'll be presenting two papers @naaclmeeting! 1. Why LLMs can't write a question with the answer "468" 🤔🙅 2. A multi-age…
0
16
0
RT @YooYeonSung1: 🏆ADVSCORE won an Outstanding Paper Award at #NAACL2025 @naaclmeeting!! If you want to learn how to make your benchmark *…
0
17
0
@rachelrudinger Read more at . A huge shoutout to my advisor @rachelrudinger and everyone else in the CLIP lab at UMD for their support and feedback :-) (6/6).
aclanthology.org
Neha Srikanth, Rachel Rudinger. Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long...
0
0
0
@rachelrudinger This helps us build groups of examples that evaluate the same pieces of knowledge, allowing us to measure in which *contexts* an LLM can correctly draw a particular inference ("inferential consistency"). We find that LLMs still have room for improvement on this front. (5/n)
0
0
0
@rachelrudinger We propose a method to pinpoint the particular pieces of knowledge a defeasible reasoning example aims to evaluate by identifying the atom(s) that are most critical in determining the overall label of a defeasible NLI example. (4/n).
0
0
0
@rachelrudinger We also explore how atomic hypothesis decomposition can help us better understand the complexities of defeasible reasoning, a softer inference task that requires models to weigh the effects of multiple, sometimes competing, pieces of evidence on a hypothesis. (3/n).
0
0
0
@rachelrudinger For example, after decomposing the hypothesis of an NLI premise-hypothesis pair into atoms, we can measure whether a model's judgment on the overall pair is logically consistent with its set of judgments on each premise-atom sub-problem. (2/n)
0
0
0
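One way to picture the premise-atom consistency check above: treat the hypothesis as the conjunction of its atoms and ask whether the model's overall judgment follows logically from its per-atom judgments. This is a hedged toy sketch, not the paper's actual method; the aggregation rule (entailed iff every atom is entailed, contradicted if any atom is contradicted) is just one natural way to define the constraint:

```python
def expected_overall(atom_judgments):
    """Aggregate per-atom NLI judgments into the logically expected
    judgment on the full premise-hypothesis pair, treating the
    hypothesis as the conjunction of its atoms."""
    if all(j == "entailment" for j in atom_judgments):
        return "entailment"      # every conjunct entailed => whole entailed
    if any(j == "contradiction" for j in atom_judgments):
        return "contradiction"   # one contradicted conjunct contradicts the whole
    return "neutral"             # otherwise the pair is at best neutral

def is_consistent(overall_judgment, atom_judgments):
    """Is a model's judgment on the full pair logically consistent
    with its judgments on each premise-atom sub-problem?"""
    return overall_judgment == expected_overall(atom_judgments)

is_consistent("entailment", ["entailment", "entailment"])  # True
is_consistent("entailment", ["entailment", "neutral"])     # False: an unsupported atom
```

A model that calls the full pair "entailment" while calling one of its atoms "neutral" is flagged as inconsistent, since the conjunction cannot be more supported than its weakest conjunct.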
I'll be presenting this work with @rachelrudinger at #NAACL2025 tomorrow (Wednesday 4/30) during Session C (Oral/Poster 2) at 2pm! 🔬 Decomposing hypotheses in traditional NLI and defeasible NLI helps us measure various forms of consistency of LLMs. Come join us!
5
9
51
RT @maharshigor: 1/ 🗞️🚨 Do great minds really think alike? 🧠🤖👨🏽🦱 We investigate Human-AI Complementarity in Question Answering using CAIM…
0
26
0
Very excited for this work to be out!! Shout-out to our collaborators in the UMD School of Public Health (@quynhcnguyen, @DrLizAparicio, and Heran Mane) for their insightful contributions. And of course, thank you to my advisors @boydgraber and @rachelrudinger!
0
0
3