Stephan Rabanser

@steverab

Followers 519 · Following 465 · Media 1K · Statuses 10K

Incoming Postdoctoral Researcher @Princeton. Reliable, safe, trustworthy machine learning. Previously: @UofT @VectorInst @TU_Muenchen @Google @awscloud

Toronto, Ontario
Joined April 2010
@steverab
Stephan Rabanser
24 days
🎓 PhDone! After 5 intense years, I'm thrilled to share that I've just passed my final oral examination and had my thesis accepted without corrections! 🥳
@steverab
Stephan Rabanser
10 days
RT @MLStreetTalk: We might not need cryptography anymore for some applications. Because ML models are drastically changing notions of trus…
@steverab
Stephan Rabanser
24 days
I also want to thank all of my (ex) lab-mates, paper collaborators, and the broader research community at @UofT and @VectorInst for their support and inspiration! Excited for my next chapter at @Princeton @PrincetonCITP with @random_walker and @msalganik!
@steverab
Stephan Rabanser
24 days
I'm incredibly grateful to my advisor @NicolasPapernot (fun fact: I am his first PhD graduate!), my supervisory committee (@rahulgk, @DavidDuvenaud, @RogerGrosse, @zacharylipton), and my examination committee (@Aaroth, @cjmaddison, Roman Genov).
@steverab
Stephan Rabanser
24 days
Thesis Title: Uncertainty-Driven Reliability: Selective Prediction and Trustworthy Deployment in Modern Machine Learning.
Thesis Link:
Defense Slides:
@steverab
Stephan Rabanser
1 month
RT @abeirami: Instead of complaining that peer review is dead, take a positive step to improve it today. The reviewers are not aliens, they…
@steverab
Stephan Rabanser
1 month
More on this work:
📄 Our workshop paper:
🖼️ Our award-winning poster:
🛠️ Check out the workshop for more new research on efficient on-device machine learning:
@steverab
Stephan Rabanser
1 month
Thanks to all my amazing collaborators at Google for hosting me for this internship in Zurich and for making this work possible: Nathalie Rauschmayr, Achin (Ace) Kulshrestha, Petra Poklukar, @wittawatj, @seanAugenstein, @ccwang1992, and @fedassa!
1
1
1
@steverab
Stephan Rabanser
1 month
In our work, we introduce Gatekeeper: a novel loss function that calibrates smaller models in cascade setups to confidently handle easy tasks while deferring complex ones. Gatekeeper significantly improves deferral performance across a diverse set of architectures and tasks.
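The deferral rule the tweet describes can be sketched in a few lines. This is an illustrative sketch only, not the Gatekeeper loss itself: it shows the confidence-thresholded cascade that Gatekeeper's confidence tuning is designed to calibrate, with hypothetical model names and toy scores.

```python
# Confidence-thresholded deferral in a two-model cascade (illustrative
# sketch; the threshold and the toy models below are assumptions, not
# details from the paper).

def cascade_predict(x, small_model, large_model, threshold=0.7):
    """Run the small model first; defer to the large model when the
    small model's top-class confidence falls below `threshold`."""
    label, confidence = small_model(x)
    if confidence >= threshold:
        return label, "small"
    return large_model(x)[0], "large"

# Toy stand-ins for the two models (hypothetical names and scores).
small = lambda x: ("cat", 0.95) if x == "easy" else ("dog", 0.40)
large = lambda x: ("cat", 0.99)

print(cascade_predict("easy", small, large))  # handled by the small model
print(cascade_predict("hard", small, large))  # deferred to the large model
```

A well-calibrated small model keeps most easy inputs on the cheap path while routing hard ones upward, which is the deferral behavior the loss optimizes.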
@steverab
Stephan Rabanser
1 month
๐Ÿ… Very excited to share that my recent Google internship project on model cascading has received the ๐—•๐—ฒ๐˜€๐˜ ๐—ฃ๐—ผ๐˜€๐˜๐—ฒ๐—ฟ ๐—”๐˜„๐—ฎ๐—ฟ๐—ฑ at the ๐˜›๐˜›๐˜–๐˜‹๐˜“๐˜ฆ๐˜ณ-๐˜๐˜” ๐˜ž๐˜ฐ๐˜ณ๐˜ฌ๐˜ด๐˜ฉ๐˜ฐ๐˜ฑ @ ๐˜๐˜Š๐˜”๐˜“! Thanks a lot to the organizers for setting up this amazing workshop!
@steverab
Stephan Rabanser
1 month
RT @adam_dziedzic: 🚨 Join us at ICML 2025 for the Workshop on Unintended Memorization in Foundation Models (MemFM)! 🚨 📅 Saturday, July 19…
@steverab
Stephan Rabanser
2 months
RT @VectorInst: Happy #AIAppreciationDay! 🎉 What better way to celebrate than showcasing even more Vector researchers advancing AI at #ICML…
@steverab
Stephan Rabanser
2 months
RT @yucenlily: In our new ICML paper, we show that popular families of OOD detection procedures, such as feature and logit based methods, a…
arxiv.org
To detect distribution shifts and improve model safety, many out-of-distribution (OOD) detection methods rely on the predictive uncertainty or features of supervised models trained on...
@steverab
Stephan Rabanser
2 months
📄 Gatekeeper: Improving Model Cascades Through Confidence Tuning
Paper ➡️
Workshop ➡️ Tiny Titans: The next wave of On-Device Learning for Foundational Models (TTODLer-FM)
Poster ➡️ West Meeting Room 215-216 on Sat 19 Jul, 3:00–3:45 p.m.
arxiv.org
Large-scale machine learning models deliver strong performance across a wide range of tasks but come with significant computational and resource constraints. To mitigate these challenges, local...
@steverab
Stephan Rabanser
2 months
📄 Selective Prediction Via Training Dynamics
Paper ➡️
Workshop ➡️ 3rd Workshop on High-dimensional Learning Dynamics (HiLD)
Poster ➡️ West Meeting Room 118-120 on Sat 19 Jul, 10:15–11:15 a.m. & 4:45–5:30 p.m.
arxiv.org
Selective Prediction is the task of rejecting inputs a model would predict incorrectly on. This involves a trade-off between input space coverage (how many data points are accepted) and model...
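The coverage/accuracy trade-off described in the abstract can be made concrete with a tiny sketch. The confidence scores and correctness labels below are invented for illustration; the point is only that raising the acceptance threshold lowers coverage while raising accuracy on the accepted points.

```python
# Coverage vs. selective accuracy at different confidence thresholds
# (all numbers are toy values, not from the paper).
confidences = [0.99, 0.95, 0.90, 0.80, 0.60, 0.55, 0.40, 0.30]
correct     = [True, True, True, False, True, False, False, False]

def selective_metrics(threshold):
    """Accept points with confidence >= threshold; report the fraction
    accepted (coverage) and the accuracy on the accepted subset."""
    accepted = [c for conf, c in zip(confidences, correct) if conf >= threshold]
    coverage = len(accepted) / len(confidences)
    accuracy = sum(accepted) / len(accepted) if accepted else 1.0
    return coverage, accuracy

for t in (0.0, 0.5, 0.85):
    cov, acc = selective_metrics(t)
    print(f"threshold={t:.2f}  coverage={cov:.3f}  selective_accuracy={acc:.3f}")
```

Sweeping the threshold traces out the risk-coverage curve that selective prediction methods aim to improve.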
@steverab
Stephan Rabanser
2 months
📄 Suitability Filter: A Statistical Framework for Classifier Evaluation in Real-World Deployment Settings (✨ oral paper ✨)
Paper ➡️
Poster ➡️ E-504 on Thu 17 Jul, 4:30–7:00 p.m.
Oral Presentation ➡️ West Ballroom C on Thu 17 Jul, 4:15–4:30 p.m.
arxiv.org
Deploying machine learning models in safety-critical domains poses a key challenge: ensuring reliable model performance on downstream user data without access to ground truth labels for direct...
@steverab
Stephan Rabanser
2 months
📄 Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention
TL;DR ➡️ We show that a model owner can artificially introduce uncertainty to abuse abstention, and we provide a detection mechanism.
Paper ➡️
Poster ➡️ E-1002 on Wed 16 Jul, 11:00 a.m.–1:30 p.m.
arxiv.org
Cautious predictions -- where a machine learning model abstains when uncertain -- are crucial for limiting harmful errors in safety-critical applications. In this work, we identify a novel threat:...
@steverab
Stephan Rabanser
2 months
📣 I will be at #ICML2025 in Vancouver next week to present two main conference papers (including one oral paper ✨) and two workshop papers! Say hi if you are around and want to chat about ML uncertainty & reliability! 😊
🧵 Papers in order of presentation below:
@steverab
Stephan Rabanser
2 months
RT @AliShahinShams1: Can safety become a smokescreen for harm? #icml2025 ML models abstain when uncertain: a safeguard to prevent catastrophic err…
@steverab
Stephan Rabanser
3 months
RT @polkirichenko: Excited to release AbstentionBench -- our paper and benchmark on evaluating LLMs' *abstention*: the skill of knowing whe…