Ignacy Stepka

@igstepka

24 Followers · 211 Following · 4 Media · 20 Statuses

PhD student @mldcmu

Pittsburgh, PA
Joined September 2017
@igstepka
Ignacy Stepka
11 days
RT @inverse_hessian: cool stuff to see at @kdd_news, including results from our senior thesis! #KDD2025.
0 replies · 1 retweet · 0 likes
@igstepka
Ignacy Stepka
13 days
@LangoMateusz @PUT_Poznan @AutonLab @CMU_Robotics 📅 Tuesday 5:45 pm - 8:00 pm in Exhibit Hall F, poster no. 437. @LSztukiewicz will present our joint work on the relationship between saliency maps and fairness as part of the Undergraduate and Master's Consortium. @PUT_Poznan @inverse_hessian. 📄 Paper:
arxiv.org
The widespread adoption of machine learning systems has raised critical concerns about fairness and bias, making mitigating harmful biases essential for AI development. In this paper, we...
1 reply · 0 retweets · 0 likes
@igstepka
Ignacy Stepka
13 days
@LangoMateusz @PUT_Poznan 📅 Monday 8:00 am - 12:00 pm in Room 700. Presenting our work on mitigating persistent client dropout in decentralized federated learning as part of the FedKDD workshop. @AutonLab @CMU_Robotics. 🌐 Project website: 📄 Paper:
1 reply · 0 retweets · 1 like
@igstepka
Ignacy Stepka
13 days
📅 Tuesday 5:30 - 8 pm (poster no. 141) and Friday 8:55 - 9:15 (Room 801 A, talk). I’ll be giving a talk and presenting a poster on robust counterfactual explanations. @LangoMateusz @PUT_Poznan. 🌐 Project website: 📄 Paper:
arxiv.org
Counterfactual explanations (CFEs) guide users on how to adjust inputs to machine learning models to achieve desired outputs. While existing research primarily addresses static scenarios,...
1 reply · 0 retweets · 1 like
@igstepka
Ignacy Stepka
13 days
This week I'm presenting a few works at @kddconf in Toronto 🇨🇦. Let's connect if you're interested in privacy/gradient inversion attacks in federated learning, counterfactual explanations, or fairness and XAI! Here's where you can find me:
1 reply · 1 retweet · 7 likes
@igstepka
Ignacy Stepka
1 month
RT @inverse_hessian: Curious about what Time Series Foundation Models actually learn? Stop by our poster today at #ICML2025! Presented by…
0 replies · 3 retweets · 0 likes
@igstepka
Ignacy Stepka
2 months
RT @CHILconference: Excited to highlight @WPotosnak et al.'s work: a novel hybrid global-local architecture + model-agnostic pharmacokineti…
0 replies · 4 retweets · 0 likes
@igstepka
Ignacy Stepka
3 months
Explore more: 📄 paper: 👨‍💻 code: 🌐 project page: 👏 Big thanks to my co-authors Jerzy Stefanowski and @LangoMateusz! #KDD2025 #TrustworthyAI #XAI 7/7🧵
github.com
Counterfactual Explanations with Probabilistic Guarantees on their Robustness to Model Change (KDD'25) - istepka/betarce
0 replies · 0 retweets · 1 like
@igstepka
Ignacy Stepka
3 months
📊 Results: Across 6 datasets, BetaRCE consistently achieved target robustness levels while preserving explanation quality and maintaining a competitive robustness-cost trade-off. 6/7🧵
1 reply · 0 retweets · 0 likes
@igstepka
Ignacy Stepka
3 months
You control both the confidence level (α) and the robustness threshold (δ), giving mathematical guarantees that your explanation will survive changes! For formal proofs on optimal SAM sampling methods and the full theoretical foundation, check out our paper! 5/7🧵
1 reply · 0 retweets · 0 likes
@igstepka
Ignacy Stepka
3 months
⚙️ Under the hood: BetaRCE explores a "Space of Admissible Models" (SAM), representing expected/foreseeable changes to your model. Using Bayesian statistics, we efficiently estimate the probability that explanations remain valid across these changes. 4/7🧵
1 reply · 0 retweets · 0 likes
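The thread's "Under the hood" step — sample models from the SAM, check whether the counterfactual stays valid under each, and put a Beta posterior on the validity probability — can be illustrated with a minimal Monte-Carlo sketch. This is only a rough illustration of the idea, not the paper's actual implementation; `is_valid_fn`, `sample_model_fn`, and all parameter names and defaults here are hypothetical.

```python
import random

def estimate_validity_lower_bound(is_valid_fn, sample_model_fn, n_models=200,
                                  confidence=0.9, n_posterior=10_000, seed=0):
    """Sketch of the Beta-posterior idea: sample admissible models,
    count how often the counterfactual stays valid, then return a
    one-sided lower credible bound on the validity probability."""
    rng = random.Random(seed)
    # Bernoulli trials: does the explanation survive each sampled model change?
    k = sum(1 for _ in range(n_models) if is_valid_fn(sample_model_fn()))
    # Beta(1, 1) prior + binomial evidence -> Beta(1 + k, 1 + n - k) posterior
    a, b = 1.0 + k, 1.0 + (n_models - k)
    # Empirical lower quantile of the posterior at the given confidence level
    draws = sorted(rng.betavariate(a, b) for _ in range(n_posterior))
    return draws[int((1.0 - confidence) * n_posterior)]
```

Under this sketch, a CFE would count as robust when the returned lower bound meets the thread's δ threshold, e.g. `estimate_validity_lower_bound(...) >= delta` at confidence α.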
@igstepka
Ignacy Stepka
3 months
✅ Our solution: BetaRCE offers probabilistic guarantees of robustness to model change. It works with ANY model class, is post hoc, and can enhance your current counterfactual methods. Plus, it lets you control the robustness-cost trade-off. 3/7🧵
1 reply · 0 retweets · 0 likes
@igstepka
Ignacy Stepka
3 months
❌ This happens constantly in real-world AI systems. Current explanation methods don't address this well: they're limited to specific models, require extensive tuning, or lack guarantees about explanation robustness. 2/7🧵
1 reply · 0 retweets · 0 likes
@igstepka
Ignacy Stepka
3 months
📣 New paper at #KDD2025 on robust counterfactual explanations! Imagine an AI tells you "Increase income by $200 to get a loan." You do it, but when you reapply, the model has been updated and rejects you anyway. We solve this issue by making CFEs robust to model changes! 1/7🧵
1 reply · 4 retweets · 8 likes
@igstepka
Ignacy Stepka
8 months
RT @ghostdayamlc: We’re thrilled to announce the first keynote speaker for GHOST DAY: Applied Machine Learning Conference 2025. Prof. Dr.…
0 replies · 5 retweets · 0 likes
@igstepka
Ignacy Stepka
1 year
RT @CMU_Robotics: The RI Summer Scholars showcased their research yesterday with a fantastic turnout! 46 undergrads from 11 different count…
0 replies · 3 retweets · 0 likes
@igstepka
Ignacy Stepka
1 year
RT @cmu_riss: Get ready for the 2024 RoboLaunch kickoff! 🤖 Come explore robotics & AI through inspiring talks with researchers from around…
0 replies · 9 retweets · 0 likes