
Stephan Rabanser
@steverab
Followers 519 · Following 465 · Media 1K · Statuses 10K
Incoming Postdoctoral Researcher @Princeton. Reliable, safe, trustworthy machine learning. Previously: @UofT @VectorInst @TU_Muenchen @Google @awscloud
Toronto, Ontario
Joined April 2010
PhDone! After 5 intense years, I'm thrilled to share that I've just passed my final oral examination and had my thesis accepted without corrections!
RT @MLStreetTalk: We might not need cryptography anymore for some applications. Because ML models are drastically changing notions of trus…
I also want to thank all of my (ex) lab-mates, paper collaborators, and the broader research community at @UofT and @VectorInst for their support and inspiration! Excited for my next chapter at @Princeton @PrincetonCITP with @random_walker and @msalganik!
I'm incredibly grateful to my advisor @NicolasPapernot (fun fact: I am his first PhD graduate!), my supervisory committee (@rahulgk, @DavidDuvenaud, @RogerGrosse, @zacharylipton), and my examination committee (@Aaroth, @cjmaddison, Roman Genov).
Thanks to all my amazing collaborators at Google for hosting me for this internship in Zurich and for making this work possible: Nathalie Rauschmayr, Achin (Ace) Kulshrestha, Petra Poklukar, @wittawatj, @seanAugenstein, @ccwang1992, and @fedassa!
Very excited to share that my recent Google internship project on model cascading has received the Best Poster Award at the TTODLer-FM Workshop @ ICML! Thanks a lot to the organizers for setting up this amazing workshop!
RT @adam_dziedzic: Join us at ICML 2025 for the Workshop on Unintended Memorization in Foundation Models (MemFM)! Saturday, July 19…
RT @VectorInst: Happy #AIAppreciationDay! What better way to celebrate than showcasing even more Vector researchers advancing AI at #ICML…
RT @yucenlily: In our new ICML paper, we show that popular families of OOD detection procedures, such as feature and logit based methods, a…
arxiv.org
To detect distribution shifts and improve model safety, many out-of-distribution (OOD) detection methods rely on the predictive uncertainty or features of supervised models trained on...
Gatekeeper: Improving Model Cascades Through Confidence Tuning. Paper ➡️ Workshop ➡️ Tiny Titans: The next wave of On-Device Learning for Foundational Models (TTODLer-FM). Poster ➡️ West Meeting Room 215-216 on Sat 19 Jul 3:00 p.m. – 3:45 p.m.
arxiv.org
Large-scale machine learning models deliver strong performance across a wide range of tasks but come with significant computational and resource constraints. To mitigate these challenges, local...
Selective Prediction Via Training Dynamics. Paper ➡️ Workshop ➡️ 3rd Workshop on High-dimensional Learning Dynamics (HiLD). Poster ➡️ West Meeting Room 118-120 on Sat 19 Jul 10:15 a.m. – 11:15 a.m. & 4:45 p.m. – 5:30 p.m.
arxiv.org
Selective Prediction is the task of rejecting inputs a model would predict incorrectly on. This involves a trade-off between input space coverage (how many data points are accepted) and model...
Suitability Filter: A Statistical Framework for Classifier Evaluation in Real-World Deployment Settings (✨ oral paper ✨). Paper ➡️ Poster ➡️ E-504 on Thu 17 Jul 4:30 p.m. – 7 p.m. Oral Presentation ➡️ West Ballroom C on Thu 17 Jul 4:15 p.m. – 4:30 p.m.
arxiv.org
Deploying machine learning models in safety-critical domains poses a key challenge: ensuring reliable model performance on downstream user data without access to ground truth labels for direct...
Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention. TL;DR ➡️ We show that a model owner can artificially introduce uncertainty, and we provide a detection mechanism. Paper ➡️ Poster ➡️ E-1002 on Wed 16 Jul 11 a.m. – 1:30 p.m.
arxiv.org
Cautious predictions -- where a machine learning model abstains when uncertain -- are crucial for limiting harmful errors in safety-critical applications. In this work, we identify a novel threat:...
I will be at #ICML2025 in Vancouver next week to present two main conference papers (including one oral paper ✨) and two workshop papers! Say hi if you are around and want to chat about ML uncertainty & reliability! 🧵 Papers in order of presentation below:
RT @AliShahinShams1: Can safety become a smokescreen for harm? #icml2025. ML models abstain when uncertain, a safeguard to prevent catastrophic err…
RT @polkirichenko: Excited to release AbstentionBench -- our paper and benchmark on evaluating LLMs' *abstention*: the skill of knowing whe…