Anurag Singh
@_anurags14
84 Followers · 1K Following · 3 Media · 130 Statuses
PhDing at The Rational Intelligence Lab at Helmholtz CISPA 🇩🇪 Past: TU Munich 🇩🇪, IISc & NSIT, India 🇮🇳
Saarbrücken Germany
Joined November 2020
🚨 New Preprint Alert! 🚨 Are you interested in Imprecise Probability (IP)? Then check out our latest preprint, "Truthful Elicitation of Imprecise Forecasts". Joint work with @Chau9991 and @krikamol. https://t.co/pzpPEe1UOh A quick thread 🧵 (1/3)
arxiv.org
The quality of probabilistic forecasts is crucial for decision-making under uncertainty. While proper scoring rules incentivize truthful reporting of precise forecasts, they fall short when...
💬 1 · 🔁 9 · ❤️ 17
🚨 I’m more than happy to share our new work! A critical question for any second-order uncertainty quantification is to ask: “even if valid, what to do with it?” Our answer is this work! We offer a per-input coverage guarantee and return prediction sets that are optimally efficient.
1/5 Ever wondered how to apply conformal prediction when there's epistemic uncertainty? Our new paper addresses this question! CP can benefit from models like Bayesian, evidential, and credal predictors to obtain better prediction sets, for instance in terms of conditional coverage.
💬 0 · 🔁 2 · ❤️ 5
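For anyone new to CP, here is a minimal sketch of plain split conformal prediction, the baseline the work above builds on. Its guarantee is only marginal (average-case over inputs), which is exactly what the per-input sets described above improve on; the predictor and variable names below are illustrative, not from the paper.

```python
import numpy as np

def split_conformal_quantile(residuals_cal, alpha=0.1):
    """Finite-sample quantile of calibration residuals for (1 - alpha) coverage.

    Plain split conformal prediction: coverage holds marginally (on average
    over inputs), not per input -- the gap the paper above addresses.
    """
    n = len(residuals_cal)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n  # finite-sample correction
    return np.quantile(residuals_cal, min(q_level, 1.0), method="higher")

# Toy usage with a hypothetical point predictor that always outputs 0.
rng = np.random.default_rng(0)
y_cal = rng.normal(size=500)        # held-out calibration labels
residuals = np.abs(y_cal - 0.0)     # |y - y_hat| on the calibration split
q = split_conformal_quantile(residuals, alpha=0.1)
print(f"90% prediction interval for a new point: [{-q:.2f}, {q:.2f}]")
```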
🧠 How do we compare uncertainties that are themselves imprecisely specified? 💡 Meet IIPM (Integral Imprecise Probability Metric) and MMI (Maximum Mean Imprecision): frameworks to compare and quantify epistemic uncertainty! With the amazing @mic_caprio and @krikamol 🚀
In this work, we introduce the Integral Imprecise Probability Metric (IIPM) framework, a Choquet integral-based generalisation of the classical Integral Probability Metric (IPM) to the setting of capacities.
💬 1 · 🔁 10 · ❤️ 26
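For context: the classical IPM compares two probability measures through a function class, and the quoted tweet lifts this to capacities via the Choquet integral. A schematic of that construction as I read it from the tweet; the paper's precise definition may differ.

```latex
% Classical IPM between probability measures P and Q, over a function class F:
\mathrm{IPM}_{\mathcal{F}}(P, Q)
  = \sup_{f \in \mathcal{F}} \left| \int f \,\mathrm{d}P - \int f \,\mathrm{d}Q \right|

% Schematic IIPM: swap the integrals for Choquet integrals w.r.t. capacities mu, nu:
\mathrm{IIPM}_{\mathcal{F}}(\mu, \nu)
  = \sup_{f \in \mathcal{F}} \left| (C)\!\!\int f \,\mathrm{d}\mu - (C)\!\!\int f \,\mathrm{d}\nu \right|,
\quad\text{where}\quad
(C)\!\!\int f \,\mathrm{d}\mu = \int_0^{\infty} \mu\big(\{f \ge t\}\big)\,\mathrm{d}t
\quad \text{for } f \ge 0.
```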
🚨 🇪🇺 Seeking a postdoc opportunity under the 2025 call for the Marie Skłodowska-Curie Actions (MSCA 2025) Postdoctoral Fellowships? 😎 Come work with the Rational Intelligence Lab at CISPA in Saarbrücken, Germany. 🔗 https://t.co/kRtFXc8DQo RT please 🙏
💬 0 · 🔁 7 · ❤️ 31
🎉Thrilled to share that our paper “Truthful Elicitation of Imprecise Forecasts” has been accepted as an Oral presentation at #UAI2025! 🙌 Check it out: https://t.co/RBJBEsB5Va
@_anurags14 @krikamol
Amazing work led by our PhD student @_anurags14 investigating the interplay between forecast indeterminacy and proper scoring mechanisms! Check it out!!
💬 0 · 🔁 4 · ❤️ 11
(3/3) For a long time, the IP community has believed that a strictly proper scoring rule for IP is impossible. While true, the impossibility holds only for deterministic rules! We propose a randomized strictly proper rule for IP. Check out our preprint for more details!
💬 0 · 🔁 0 · ❤️ 0
(2/3) Apart from Bayes' rule, strictly proper scoring rules are another fundamental component of the Bayesian perspective: they provide the correct incentives for truthful probabilistic reporting, a property known as truthful elicitation. Our motivation: can we do something similar for IP?
💬 1 · 🔁 0 · ❤️ 0
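For readers new to the term: a (positively oriented) scoring rule S(q, y) rewards forecast q when outcome y occurs, and it is strictly proper when truthful reporting uniquely maximises the expected score. The standard definition, with the Brier score as a classic example:

```latex
% S is strictly proper iff, for every belief p and any report q != p:
\mathbb{E}_{Y \sim p}\big[S(p, Y)\big] > \mathbb{E}_{Y \sim p}\big[S(q, Y)\big]

% Classic example: the Brier score over classes k (higher is better as written):
S_{\mathrm{Brier}}(q, y) = -\sum_{k} \big(q_k - \mathbf{1}\{y = k\}\big)^2
```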
I am at @icml2024 with @_anurags14 and @krikamol to present our spotlight work on domain generalisation via imprecise learning this week! Come and have a discussion if you are interested in uncertainty-aware ML, explainability, and preference modelling! #ICML2024
💬 0 · 🔁 3 · ❤️ 25
✈️ I’ll be at @icmlconf next week together with @Chau9991 and @_anurags14 to present our work on imprecise generalization. Looking forward to catching up with everyone. #ICML2024
💬 0 · 🔁 1 · ❤️ 18
@CISPA @Chau9991 @krikamol Of course, if you want to read something a bit more technical, you can refer to another blog post that contains a technical introduction. And if you are at ICML in person, drop by our spotlight presentation on Thursday from 11:30 AM :D https://t.co/e7stCN8zI5
💬 0 · 🔁 0 · ❤️ 3
@CISPA @Chau9991 @krikamol I give a non-technical ELI5 of Imprecise Learning in a 4-minute Medium article. https://t.co/q247yxAkpA
medium.com
In the real world, new conditions and changing scenarios often differ from training data, causing current ML models to fail. Let’s explore…
💬 1 · 🔁 0 · ❤️ 4
@CISPA @Chau9991 @krikamol 7. Our ICML work takes ideas from the above fields to propose a new perspective on model alignment and generalisation! DGIL: Domain Generalisation via Imprecise Learning https://t.co/biX95ZliXn
arxiv.org
Out-of-distribution (OOD) generalisation is challenging because it involves not only learning from empirical data, but also deciding among various notions of generalisation, e.g., optimising the...
💬 1 · 🔁 0 · ❤️ 2
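To make "deciding among notions of generalisation" concrete, here is one illustrative single-parameter family that spans average-case to worst-case risk over K training domains. This parametrisation is my own simplification; the paper's imprecise-learning formulation is richer than a single scalar knob.

```latex
% Per-domain risks R_1, ..., R_K; lambda trades off average-case vs worst-case:
\mathcal{R}_{\lambda}(h)
  = (1 - \lambda)\,\frac{1}{K}\sum_{k=1}^{K} R_k(h) \;+\; \lambda\,\max_{k} R_k(h),
\qquad \lambda \in [0, 1]
% lambda = 0 recovers ERM-style average risk; lambda = 1 the worst-case (robust) risk.
```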
6. Also, collaborate! In academia, we can collaborate more freely than in industry. In my lab ( https://t.co/tn5DhPW2vo), @CISPA, @Chau9991, @krikamol and I are actively interested in these ideas. We will be at ICML in person, explaining more about our work! (8/n)
💬 1 · 🔁 0 · ❤️ 2
5. Take inspiration from Imprecise Probability (IP)! IP is a well-studied field of statistics that can operationalise this awareness of epistemic uncertainty. It also connects to the ideas from economics and decision-making discussed above! https://t.co/Qunt8b4J7D (7/n)
philpapers.org
💬 1 · 🔁 0 · ❤️ 2
(cont.) For example, a model that says P(head) = 0.5 because the coin is fair versus one that says 0.5 because it knows nothing about the coin can lead to drastically different decisions. In the first case it's a fair coin; in the second, I need to collect more data! (6/n)
💬 1 · 🔁 0 · ❤️ 3
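A tiny numerical version of the coin story, representing "I know nothing" as a credal interval of head-probabilities rather than a single 0.5. Illustrative code, not from any paper:

```python
import numpy as np

# Payoff of a bet: win 1 on heads, lose 1 on tails.
def expected_payoff(p_head):
    return p_head * 1.0 + (1.0 - p_head) * (-1.0)

# Precise belief: the coin is known to be fair.
print("fair coin, expected payoff:", expected_payoff(0.5))  # 0.0

# Credal belief: total ignorance, p_head anywhere in [0, 1].
credal_set = np.linspace(0.0, 1.0, 101)
payoffs = [expected_payoff(p) for p in credal_set]
# Lower/upper expectations: a cautious agent acts on the worst case.
print("ignorant, payoff range:", min(payoffs), "to", max(payoffs))  # -1.0 to 1.0
```

Both beliefs "average out" to 0.5, yet the ignorant agent's worst-case payoff is -1, which is exactly why it should collect more data instead of betting.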
4. Better alignment needs better epistemic awareness! Our models need to know what they don’t know. This has huge implications when we want to use model outputs to make decisions. (5/n)
💬 1 · 🔁 0 · ❤️ 2
3. Take inspiration from Social Choice Theory (SCT)! Preferences, aggregation, and consensus have been studied extensively in SCT. RLHF and DPO are utilitarian (political philosophy *wink wink*). There are also results in SCT that go beyond average aggregation. (4/n)
💬 1 · 🔁 0 · ❤️ 2
2. Take inspiration from Economics! Economists have studied utility and decision-making for a long time. But how does utility relate to ML? Minimising the objective == maximising utility. This is the utility that developers bake into models. https://t.co/mhUecyK3UQ (3/n)
en.wikipedia.org
💬 1 · 🔁 0 · ❤️ 2
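The "minimise objective == maximise utility" slogan is just a sign flip: take the utility to be the negative of the training loss (a standard textbook identity, nothing paper-specific):

```latex
\min_{\theta}\; \mathbb{E}_{(x,y)}\big[\ell(f_{\theta}(x), y)\big]
\;\Longleftrightarrow\;
\max_{\theta}\; \mathbb{E}_{(x,y)}\big[u(f_{\theta}(x), y)\big],
\qquad u = -\ell
```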
1. Model alignment and generalisation desperately need a new perspective! To study generalisation better and to interpret model behaviour, we need to start viewing model training as optimising for a behaviour! How can we do that? We have some possible hunches. (2/n)
💬 1 · 🔁 0 · ❤️ 2
The AI industry is giving a huge push to AI Alignment; see, for example, OpenAI below. In my discussions with ML researchers in academia, I hear too much pessimism about being GPU-poor. I want to discuss what academia can contribute to this field! 🧵 (1/n) https://t.co/Gcn4rtlFTn
spectrum.ieee.org
Jan Leike explains OpenAI's effort to protect humanity from superintelligent AI
💬 1 · 🔁 3 · ❤️ 13