Neel Bhandari
@NeelBhandari9
308 Followers · 90 Following · 14 Media · 609 Statuses
Master's Student @LTIatCMU | ML Scientist @PayPal | Open Research @CohereForAI Community | Previously External Research Student @MITIBMLab. Views my own.
Bengaluru South, India
Joined September 2016
Our work has been accepted at the EACL 2026 main conference!
1/ 🚨 New paper alert 🚨 RAG systems excel on academic benchmarks - but are they robust to variations in linguistic style? We find RAG systems are brittle. Small shifts in phrasing trigger cascading errors, driven by the complexity of the RAG pipeline 🧵
Super happy to receive the Best Paper Award at #NeurIPS2025 for our Artificial Hivemind paper!! (Really enjoyed giving the oral talk at NeurIPS as well!)
⚠️ Different models. Same thoughts. ⚠️ Today's AI models converge into an Artificial Hivemind, a striking case of mode collapse that persists even across heterogeneous ensembles. Our #neurips2025 D&B oral paper dives deep into…
Excited to teach Advanced NLP at CMU again this semester! Slides are posted to the course page as the course proceeds: https://t.co/xsqARaZEK9 Lectures will be uploaded to YouTube: https://t.co/4kfXvS2MCb
Super excited and honored to receive this award! 🥰
A hearty congratulations to the LTI's @MaartenSap, who's been awarded an @OkawaFoundation Research Grant for his work in socially-aware artificial intelligence.
LMArena is widely used for model evaluation, but is it measuring true progress? 🔮 In our work, "The Leaderboard Illusion", we reveal: private testing, data access asymmetries, overfitting risks, and silent deprecations. Despite best intentions, arena policies favor a few!
Excited to announce our #NAACL2025 Oral paper! ✨ We carried out the largest systematic study so far to map the links between upstream choices, intrinsic bias, and downstream zero-shot performance across 131 CLIP vision-language encoders, 26 datasets, and 55 architectures!
Very proud of this work, which is being presented @iclr_conf later today. While I will not be there, catch up with @viraataryabumi and @ahmetustun89, who are both fantastic and can share more about our work at both @Cohere_Labs and @cohere. 🔥✨
In our latest work, we ask "what is the impact of code data used in pre-training on non-code tasks?" Work w/ @viraataryabumi, @yixuan_su, @rayhascode, @adrien_morisot, @1vnzh, @acyr_l, @mziizm, @ahmetustun89, @sarahookr. Paper: https://t.co/CxkgHqZEGB
🚨 New preprint 🚨 I'm super excited to share our work: To Code, or Not To Code? Exploring the Impact of Code in Pre-training. Paper: https://t.co/HCOvCz6hfp w/ @yixuan_su, @rayhascode, @adrien_morisot, @1vnzh, @acyr_l, @mziizm, @ahmetustun89, @sarahookr [1/n]
Very excited, obviously about the work, but also because I finally got to make a Taylor Swift reference in a paper title!!
Excited to share PolyGuard 🛡️, our new state-of-the-art multilingual safety detector. PolyGuard supports 17 languages and outperforms all open-source and commercial moderation tools!
Need a multilingual safety detector? 🚨 Introducing PolyGuard 🚨 ✔️ supports 17 languages ✔️ generates structured output for prompt safety, response safety, and model refusal ✔️ outperforms existing SOTA open and commercial safety detectors by 5.5%. Paper: https://t.co/lz8R1nnjFd 🧵
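To make "structured output" concrete, here is a small illustrative sketch of what such a verdict could look like; the field names are assumptions for the example, not PolyGuard's actual schema:

```python
# Hypothetical shape of a PolyGuard-style safety verdict. Field names
# are illustrative assumptions, not the model's real output schema.
verdict = {
    "prompt_harmful": True,     # safety label for the user prompt
    "response_harmful": False,  # safety label for the model response
    "response_refusal": True,   # whether the model refused the request
}
print(verdict)
```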
Real user queries often look different from the clean, concise ones in academic benchmarks: ambiguous, full of typos, and much less readable. We show that even strong RAG systems quickly break under these conditions. Awesome project led by @NeelBhandari9 and @tianyu_cao_24!!
These days RAG systems have gotten popular for boosting LLMs, but they're brittle. Minor shifts in phrasing (style, politeness, typos) can wreck the pipeline. Even advanced components don't fix the issue. Check out this extensive eval by @NeelBhandari9 and @tianyu_cao_24!
11/ This paper has been an incredible effort across institutions @LTIatCMU @uwcse. Huge thanks to my co-first author @tianyu_cao_24 and co-authors @akhila_yerukola @AkariAsai @MaartenSap ✨
10/ 💬 Code: https://t.co/f1o0WViWGu Paper: "Out of Style: RAG's Fragility to Linguistic Variation": https://t.co/yaC3h0FoHu Read our paper for more details on the impact of scaling retrieved documents, the specific effects of each linguistic variation on RAG pipelines, and much more!
9/ 🚨 Takeaway: RAG systems suffer major performance drops from simple linguistic variations. Advanced techniques offer temporary relief, but real robustness demands fundamental changes: more resilient components and fewer cascading errors, in order to serve all users effectively.
8/ 🛠️ Adding advanced techniques to vanilla RAG improves robustness... sometimes.
✅ Reranking improves performance on rewrites, but gaps in performance with original queries remain.
⚠️ HyDE helps rewritten queries but hurts original queries, creating a false sense of robustness.
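Since the thread leans on HyDE, here is a minimal sketch of the idea, with stated assumptions: `generate` and `embed` are hypothetical callables standing in for an LLM and an embedding model, and `doc_embeddings` is a precomputed matrix. A sketch of the technique, not the paper's implementation.

```python
import numpy as np

def hyde_retrieve(query, generate, embed, doc_embeddings, k=5):
    """HyDE-style retrieval sketch: instead of embedding the raw query,
    embed an LLM-generated *hypothetical* answer document and rank real
    documents by cosine similarity against it."""
    hypothetical_doc = generate(f"Write a short passage answering: {query}")
    q = embed(hypothetical_doc)
    # doc_embeddings is assumed to be an (n_docs, dim) numpy array.
    sims = doc_embeddings @ q / (
        np.linalg.norm(doc_embeddings, axis=1) * np.linalg.norm(q) + 1e-9
    )
    return np.argsort(-sims)[:k]  # indices of the top-k candidate documents
```

One plausible reading of the caveat above: retrieval now hinges on the generated document rather than the user's wording, which can help noisy rewrites while perturbing queries that were already clean.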
7/ 🤔 Well, maybe scaling generation model size helps? Scaling up LLM size helps narrow the performance gap between original and rewritten queries. However, this is not consistent across variations. Larger models occasionally worsen the impact, particularly with RTT variations.
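For readers outside the subfield: RTT here presumably refers to round-trip translation, an assumption based on standard usage in robustness evaluations. A minimal sketch of such a perturbation, with `translate` as a hypothetical (text, src, tgt) -> text callable rather than any real API:

```python
def round_trip(query, translate, pivot="de"):
    """Round-trip translation (RTT) perturbation sketch: English -> pivot
    language -> English. The imperfect round trip introduces the kind of
    structural and grammatical drift the thread describes."""
    return translate(translate(query, "en", pivot), pivot, "en")
```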
6/ ⚖️ RAG is more fragile than LLM-only setups. RAG's retrieval-generation pipeline amplifies linguistic errors, leading to greater performance drops. On PopQA, RAG degrades by 23% vs. just 11% for the LLM-only setup. ⚠️ The main culprit? Retrieval emerges as the weakest link.
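To make the cascade concrete, a minimal sketch of the two-stage pipeline being blamed here; `retrieve` and `generate` are hypothetical callables, not the paper's system:

```python
def rag_answer(query, retrieve, generate, k=5):
    """Minimal RAG pipeline sketch: any retrieval miss on a perturbed
    query feeds bad context to the generator, so errors compound
    across stages instead of staying contained."""
    docs = retrieve(query, k)        # stage 1: the thread's weakest link
    context = "\n\n".join(docs)
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)          # stage 2: inherits stage-1 errors
```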
5/ 🧩 Generation Fragility: Linguistic variations lead to generation accuracy drops, with Exact Match score down by up to ~41% and Answer Match score down by up to ~17%. Structural changes from RTT are particularly damaging, significantly reducing response accuracy.
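A sketch of the two metric styles named above, using the normalization that is conventional in open-domain QA evaluation; the paper's exact definitions may differ:

```python
import re
import string

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold_answers):
    """1.0 iff the normalized prediction equals a normalized gold answer."""
    return float(any(normalize(prediction) == normalize(g) for g in gold_answers))

def answer_match(prediction, gold_answers):
    """Softer criterion: a gold answer contained in the prediction counts."""
    pred = normalize(prediction)
    return float(any(normalize(g) in pred for g in gold_answers))

print(exact_match("The Eiffel Tower", ["Eiffel Tower"]))                    # 1.0
print(answer_match("It is the Eiffel Tower, in Paris.", ["Eiffel Tower"]))  # 1.0
```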
4/ 🔍 Retrieval Robustness: Retrieval recall plummets by up to 40.41% due to linguistic variations, especially when exposed to informal queries. Grammatical errors from RTT and typos notably degrade performance, highlighting retrievers' sensitivity to a range of linguistic variations.
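And a sketch of the retrieval recall being reported, under the assumption that `retrieved_ids` holds each query's ranked document IDs and `relevant_ids` the gold IDs; this mirrors the standard recall@k definition, not necessarily the paper's exact eval code:

```python
def recall_at_k(retrieved_ids, relevant_ids, k=10):
    """Share of queries where at least one gold document appears in the
    top-k retrieved IDs; drops here starve the generator of evidence."""
    hits = [
        bool(set(ranked[:k]) & set(gold))
        for ranked, gold in zip(retrieved_ids, relevant_ids)
    ]
    return sum(hits) / len(hits)
```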