
Explainable Machine Learning
@ExplainableML
Followers: 3K · Following: 148 · Media: 72 · Statuses: 237
Institute for Explainable Machine Learning @HelmholtzMunich and Interpretable and Reliable Machine Learning group @TU_Muenchen
Munich, Germany
Joined August 2021
RT @LucaEyring: Reward hacking is challenging when fine-tuning few-step Diffusion models. Direct fine-tuning on rewards can create artifact…
RT @confusezius: 💫 After four PhD years on all things multimodal, pre- and post-training, I'm super excited for a new research chapter @Goo…
6/ Disentanglement of Correlated Factors via Hausdorff Factorized Support (ICLR 2023)
@confusezius, @marksibrahim, @zeynepakata, Pascal Vincent, @D_Bouchacourt
[Paper]: [Code]:
5/ Waffling around for Performance: Visual Classification with Random Words and Broad Concepts (ICCV 2023)
@confusezius*, @jaemyung_kim*, @ASophiaKoepke, @CordeliaSchmid, @zeynepakata
[Paper]: [Code]: github.com/ExplainableML/WaffleCLIP
4/ Vision-by-Language for Training-Free Compositional Image Retrieval (ICLR 2024)
@ShyamgopalKart1*, @confusezius*, Massimiliano Mancini, @zeynepakata
[Paper]: [Code]: github.com/ExplainableML/Vision_by_Language
3/ Fantastic Gains and Where to Find Them (ICLR 2024 Spotlight)
@confusezius*, @lukas_thede*, @ASophiaKoepke, @OriolVinyalsML, @olivierhenaff, @zeynepakata
[Paper]: [Code]: github.com/ExplainableML/FantasticGains
2/ A Practitioner's Guide to Continual Multimodal Pretraining (NeurIPS 2024)
@confusezius*, @vishaal_urao*, @sbdzdz, @AmyPrb, @mehdidc, @OriolVinyalsML, @olivierhenaff, @SamuelAlbanie, @MatthiasBethge, @zeynepakata
[Paper]: [Code]: github.com/ExplainableML/fomo_in_flux
1/ Context-Aware Multimodal Pretraining (CVPR 2025 Highlight)
@confusezius, @zeynepakata, @dimadamen, @ibalazevic, @olivierhenaff
[Paper]: arxiv.org
🎓 PhD Spotlight: Karsten Roth. Celebrate @confusezius, who defended his PhD on June 24th summa cum laude! Karsten has been an @ELLISforEurope and IMPRS-IS PhD student since May 2021, supervised by both @zeynepakata and @OriolVinyalsML. His research has been centered around…
RT @kirill_bykov: Personal news: I have defended my PhD thesis "Explaining Representations in Deep Neural Networks" at TU Berlin with summa…
RT @SimonRoschmann: How can we circumvent data scarcity in the time series domain? We propose to leverage pretrained ViTs (e.g., CLIP, DIN…
RT @L_Salewski: I am very happy to announce that I successfully defended my PhD thesis with the title "Advancing Multimodal Explainability:…
RT @robinhesse_: Got a strong XAI paper rejected from ICCV? Submit it to our ICCV eXCV Workshop today—we welcome high-quality work! 🗓️ Subm…
RT @zeynepakata: It is a great honor to receive the ZukunftsWissen Prize 2025 from the German Academy of the Sciences @Leopoldina with gene…
5/ BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen Neural Networks (ECCV 2022)
Uddeshya Upadhyay*, @ShyamgopalKart1*, @yanbei_c, Massimiliano Mancini, @zeynepakata
[Paper]: [Code]: github.com/ExplainableML/BayesCap
4/ KG-SP: Knowledge Guided Simple Primitives for Open World Compositional Zero-Shot Learning (CVPR 2022)
@ShyamgopalKart1, Massimiliano Mancini, @zeynepakata
[Paper]: [Code]: github.com/ExplainableML/KG-SP
3/ Vision-by-Language for Training-Free Compositional Image Retrieval (ICLR 2024)
@ShyamgopalKart1*, @confusezius*, Massimiliano Mancini, @zeynepakata
[Paper]: [Code]: github.com/ExplainableML/Vision_by_Language
2/ EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval (ECCV 2024)
@hummelth_*, @ShyamgopalKart1*, Mariana-Iuliana Georgescu, @zeynepakata
[Paper]: [Code]: github.com/ExplainableML/EgoCVR
1/ ReNO: Enhancing One-step Text-to-Image Models through Reward-based Noise Optimization (NeurIPS 2024)
@LucaEyring*, @ShyamgopalKart1*, @confusezius, @zeynepakata
[Paper]: [Code]: github.com/ExplainableML/ReNO
🎓 PhD Spotlight: Shyamgopal Karthik. Celebrate @ShyamgopalKart1, who will defend his PhD on 23rd June! Shyam has been a PhD student @uni_tue since October 2021, supervised by @zeynepakata. His research has been centred around compositionality in vision and language. In…