Tiffany Ding Profile
Tiffany Ding

@tifding

Followers: 303 · Following: 103 · Media: 4 · Statuses: 18

Statistics PhD student @UCBerkeley

Joined June 2023
@tifding
Tiffany Ding
4 months
This is joint work with Jean-Baptiste Fermanian & Joseph Salmon and inspired by discussions with the rest of the @PlantNetProject team (p.s. Check out their awesome plant identification app if you haven’t yet!🌱) (n/n)
@tifding
Tiffany Ding
4 months
All of these tools lead to strong empirical performance on PlantNet and iNaturalist. Check out the looooong tail; this is a tough setting! Some classes are left with 0 holdout examples for calibration (4/n)
@tifding
Tiffany Ding
4 months
These 2 approaches lead to 3 tools for practitioners to try: (1) a new conformal score function, (2) a conformal-inspired procedure that is super lightweight but has no formal guarantee, and (3) a label-weighted conformal procedure w/ a formal guarantee (3/n)
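For a sense of what a label-weighted procedure can look like mechanically, here is a generic weighted-quantile helper in Python. This is only a sketch under my own naming (`weighted_quantile`, `scores`, `weights`) and is not the paper's tool (3), whose exact weighting scheme and guarantee are given in the paper.

```python
# Generic weighted-quantile helper of the kind a label-weighted conformal
# procedure might build on. Hypothetical sketch, not the paper's algorithm.
import numpy as np

def weighted_quantile(scores, weights, q):
    """Return the q-quantile of the weighted empirical distribution of `scores`."""
    order = np.argsort(scores)
    scores, weights = scores[order], weights[order]
    cdf = np.cumsum(weights) / np.sum(weights)   # weighted empirical CDF
    idx = np.searchsorted(cdf, q, side="left")
    return scores[min(idx, len(scores) - 1)]
```

A label-weighted procedure would call something like this with weights that depend on each calibration point's class, so that rare classes are not drowned out by common ones when the threshold is chosen.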
@tifding
Tiffany Ding
4 months
The goal: Useful prediction sets in long-tailed settings. Our solution: Smoothly trade off size and class-conditional coverage by 💡#1: targeting a weaker notion ("macro-coverage"), or 💡#2: targeting class-conditional coverage, then “backing off” until the sizes are reasonable (2/n)
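To make 💡#1 concrete, here is a small Python sketch of the difference between marginal coverage and a macro-style coverage metric (per-class coverages averaged with equal weight). The function names and the exact definition are my reading of the tweet, not necessarily the paper's.

```python
# Sketch: marginal coverage vs. a macro-style coverage metric.
# pred_sets is a boolean (n_test, n_classes) matrix; labels are integer class ids.
import numpy as np

def marginal_coverage(pred_sets, labels):
    """Fraction of all test points whose true label is in their prediction set."""
    covered = pred_sets[np.arange(len(labels)), labels]
    return covered.mean()

def macro_coverage(pred_sets, labels):
    """Average of per-class coverages, so each class counts equally, rare or not."""
    covered = pred_sets[np.arange(len(labels)), labels]
    return np.mean([covered[labels == y].mean() for y in np.unique(labels)])
```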
@tifding
Tiffany Ding
4 months
New paper: Conformal prediction for long-tailed classification🐒 https://t.co/J1PJARDU3M 🧑‍🌾 (plant enthusiast): Help me identify plants! 🤖 (existing conformal algs): Do you want sets that never include rare plants or sets that contain 100s of labels? 🧑‍🌾: Uhh… neither? A🧵(1/n)
arxiv.org: Many real-world classification problems, such as plant identification, have extremely long-tailed class distributions. In order for prediction sets to be useful in such settings, they should (i)...
@jivatneet
Jivat Kaur
9 months
📯 New work! Conformal Prediction Sets with Improved Conditional Coverage using Trust Scores. https://t.co/7Ayo5Izeye How useful are prediction sets that achieve 90% marginal coverage by failing on 10% cases that are challenging? Not very useful for clinicians who require
arxiv.org: Standard conformal prediction offers a marginal guarantee on coverage, but for prediction sets to be truly useful, they should ideally ensure coverage conditional on each test point...
@ml_angelopoulos
Anastasios Nikolas Angelopoulos
1 year
📣Announcing the 2024 NeurIPS Workshop on Statistical Frontiers in LLMs and Foundation Models 📣 Submissions open now, deadline September 15th https://t.co/Q97EWZcu2T If your work intersects with statistics and black-box models, please submit! This includes: ✅ Bias ✅
@tifding
Tiffany Ding
2 years
Sorry in advance if we missed topics! We tried to be as comprehensive as possible given the space constraints
@tifding
Tiffany Ding
2 years
The greatest accomplishment of my statistics career has been winning this year’s @UCBStatistics T-shirt design competition with a @SFBART-inspired shirt designed w/ @aashen12! {stats nerds} ∩ {public transit nerds} ≠ ∅ 📉🚅
@tifding
Tiffany Ding
2 years
Interested in uncertainty quantification and how to make conformal prediction sets more practically useful? Come to our poster at #NeurIPS23! 📍Poster #1623 🕙 Thursday 10:45-12:45 w/ @ml_angelopoulos, @stats_stephen, Michael I. Jordan & Ryan Tibshirani https://t.co/t8QZT827j0
@tifding
Tiffany Ding
2 years
This is joint work with @ml_angelopoulos, @stats_stephen, Michael I. Jordan, and Ryan J. Tibshirani. Please reach out if you’re interested in chatting about conformal prediction or statistics in general! (4/4)
@tifding
Tiffany Ding
2 years
A naive strategy is to split the data classwise and run conformal once per class. But with many classes/limited data, this gives bad results (big sets, etc.). In clustered conformal prediction, we cluster classes that have similar score distributions and pool their data! (3/4)
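As a rough illustration (not the paper's implementation), here is what the naive classwise baseline looks like next to a clustered variant. The class "embedding" used for clustering (a few score quantiles per class) and the use of k-means are my own assumptions.

```python
# Illustrative sketch: classwise conformal vs. pooling data across clustered classes.
# Assumes cal_scores (1D conformity scores) and cal_labels (integer class ids).
import numpy as np
from sklearn.cluster import KMeans

def classwise_thresholds(cal_scores, cal_labels, n_classes, alpha=0.1):
    """Naive baseline: one conformal threshold per class from that class's data only."""
    qhats = np.ones(n_classes)  # classes with no holdout data fall back to the trivial threshold
    for y in range(n_classes):
        s = cal_scores[cal_labels == y]
        if len(s):
            level = min(np.ceil((len(s) + 1) * (1 - alpha)) / len(s), 1.0)
            qhats[y] = np.quantile(s, level, method="higher")
    return qhats

def clustered_thresholds(cal_scores, cal_labels, n_classes, n_clusters=5, alpha=0.1):
    """Cluster classes with similar score distributions, then pool data within clusters."""
    # Hypothetical class embedding: a few quantiles of each class's calibration scores.
    feats = np.vstack([
        np.quantile(cal_scores[cal_labels == y], [0.5, 0.7, 0.9])
        if np.any(cal_labels == y) else np.zeros(3)
        for y in range(n_classes)
    ])
    cluster_of = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
    qhats = np.ones(n_classes)
    for c in range(n_clusters):
        pooled = cal_scores[np.isin(cal_labels, np.where(cluster_of == c)[0])]
        if len(pooled):
            level = min(np.ceil((len(pooled) + 1) * (1 - alpha)) / len(pooled), 1.0)
            qhats[cluster_of == c] = np.quantile(pooled, level, method="higher")
    return qhats
```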
@tifding
Tiffany Ding
2 years
Standard conformal prediction gives marginal coverage. But most patients are normal, so always predicting {normal} gives marginal coverage. Instead, we’d prefer to get class-conditional coverage, but this is very data-hungry. Our solution: clustered conformal prediction. (2/4)
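For context, here is a minimal split-conformal sketch in Python (the names and the score 1 − p(true class) are just one common choice, not necessarily the setup in the paper). Because a single threshold is shared across all classes, a model that is confident on the majority class can return tiny sets like {normal} and still hit 90% coverage on average.

```python
# Minimal split-conformal classification sketch (illustrative assumptions only).
import numpy as np

def marginal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """One shared threshold -> marginal (on-average) coverage of 1 - alpha."""
    # Conformity score: 1 - softmax probability assigned to the true label.
    cal_scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(cal_scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(cal_scores, level, method="higher")
    # A label enters the set whenever its score is below the single shared threshold.
    return (1.0 - test_probs) <= qhat   # boolean (n_test, n_classes) matrix
```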
@tifding
Tiffany Ding
2 years
📢New paper! Class-conditional conformal prediction with many classes https://t.co/t8QZT827j0 👩‍⚕️ (doctor): “Give me uncertainty on the patient diagnosis.” 🤖(conformal): “The 95% prediction set is {normal}.” 👨🏻(patient): ☠️ How do you avoid this negative outcome? A 🧵(1/4)