Simon Roschmann Profile
Simon Roschmann

@SimonRoschmann

Followers: 35
Following: 118
Media: 5
Statuses: 23

PhD student @ExplainableML @TU_Muenchen @HelmholtzMunich. Passionate about ML research.

München, Bayern
Joined January 2020
@LucaEyring
Luca Eyring
25 days
Reward hacking is a key challenge when fine-tuning few-step diffusion models. Direct fine-tuning on rewards can create artifacts that game metrics while degrading visual quality. We propose Noise Hypernetworks as a theoretically grounded solution, inspired by test-time optimization.
8
52
345
@Vasya2104
Vasilii Feofanov
2 months
🚀 We are happy to organize the BERT²S workshop @NeurIPSConf 2025 on Recent Advances in Time Series Foundation Models. 🌐 https://t.co/QuqapQhMJp 📜Submit by August 22 🎓Speakers and panelists: @ChenghaoLiu15 Mingsheng Long @zoe_piran @danielle_maddix @atalwalkar @qingsongedu
0
4
6
@SimonRoschmann
Simon Roschmann
2 months
This project was a collaboration between @ExplainableML (@HelmholtzMunich, @TU_Muenchen) and Paris Noah’s Ark Lab (@Huawei). Thank you to my collaborators @QBouniot, @Vasya2104, @IevgenRedko, and particularly to my advisor @zeynepakata for guiding me through my first PhD project!
1
1
6
@SimonRoschmann
Simon Roschmann
2 months
TiViT is on par with TSFMs (Mantis, Moment) on the UEA benchmark and significantly outperforms them on the UCR benchmark. The representations of TiViT and TSFMs are complementary; their combination yields SOTA classification results among foundation models.
1
0
4
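The combination step described in the tweet above can be illustrated with a minimal linear-probe sketch: concatenate frozen features from TiViT and a TSFM and train one linear classifier on top. The feature arrays and dimensions below are placeholders, not the paper's exact protocol.

```python
# Minimal sketch of combining frozen TiViT and TSFM representations with a
# single linear classifier. The random arrays stand in for features that would
# actually be extracted by the two frozen encoders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, num_classes = 1000, 5
tivit_feats = rng.normal(size=(n, 768))   # placeholder for TiViT (frozen ViT) features
tsfm_feats = rng.normal(size=(n, 512))    # placeholder for TSFM (e.g. Mantis/Moment) features
labels = rng.integers(0, num_classes, size=n)

combined = np.concatenate([tivit_feats, tsfm_feats], axis=1)   # complementary features side by side
X_tr, X_te, y_tr, y_te = train_test_split(combined, labels, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("linear-probe accuracy on combined features:", probe.score(X_te, y_te))
```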
@SimonRoschmann
Simon Roschmann
2 months
We further explore the structure of TiViT representations and find that intermediate layers with high intrinsic dimension are the most effective for time series classification.
1
0
3
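One common way to measure the intrinsic dimension mentioned in the tweet above is the TwoNN estimator (Facco et al., 2017). The sketch below uses its maximum-likelihood variant on a matrix of layer activations; the estimator and protocol used in the paper may differ.

```python
# Sketch of a TwoNN-style intrinsic dimension estimate (maximum-likelihood
# variant) for a set of hidden-layer representations. The random features are
# placeholders for activations extracted from one ViT layer.
import numpy as np
from scipy.spatial.distance import cdist

def two_nn_intrinsic_dim(feats):
    """feats: (N, D) array of representations from one hidden layer."""
    dists = cdist(feats, feats)
    np.fill_diagonal(dists, np.inf)           # ignore distance to self
    nn = np.sort(dists, axis=1)
    mu = nn[:, 1] / nn[:, 0]                  # ratio of 2nd to 1st nearest-neighbor distance
    mu = mu[np.isfinite(mu) & (mu > 1.0)]     # drop degenerate points
    return len(mu) / np.sum(np.log(mu))       # MLE under mu ~ Pareto(1, d)

rng = np.random.default_rng(0)
layer_feats = rng.normal(size=(500, 64))      # stand-in for ViT activations at one layer
print(f"estimated intrinsic dimension: {two_nn_intrinsic_dim(layer_feats):.1f}")
```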
@SimonRoschmann
Simon Roschmann
2 months
Time Series Transformers typically rely on 1D patching. We show theoretically that the 2D patching applied in TiViT can increase the number of label-relevant tokens and reduce the sample complexity.
1
0
3
@SimonRoschmann
Simon Roschmann
2 months
Our Time Vision Transformer (TiViT) converts a time series into a grayscale image, applies 2D patching, and utilizes a pretrained frozen ViT for feature extraction. We average the representations from a specific hidden layer and only train a linear classifier.
1
0
4
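A minimal sketch of the pipeline described in the tweet above, assuming a simple row-stacking conversion from series to grayscale image and a DINOv2-style backbone that exposes get_intermediate_layers; the actual image construction, checkpoint, and layer choice in TiViT may differ.

```python
# Hedged sketch of a TiViT-style pipeline: series -> grayscale image ->
# frozen ViT features from one hidden layer -> trainable linear head.
# The image construction, backbone, and layer index are assumptions,
# not necessarily the paper's exact choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

def series_to_image(x, width=32, out_size=224):
    """Reshape a univariate series (B, T) into a grayscale image by stacking
    consecutive length-`width` windows as rows, then resize for the ViT."""
    b, t = x.shape
    x = (x - x.amin(1, keepdim=True)) / (x.amax(1, keepdim=True) - x.amin(1, keepdim=True) + 1e-8)
    rows = -(-t // width)                                    # ceil(T / width)
    x = F.pad(x, (0, rows * width - t))                      # zero-pad the tail
    img = x.view(b, 1, rows, width)
    img = F.interpolate(img, size=(out_size, out_size), mode="bilinear", align_corners=False)
    return img.repeat(1, 3, 1, 1)                            # pretrained ViTs expect 3 channels

class TiViTStyleClassifier(nn.Module):
    def __init__(self, vit, layer_idx, feat_dim, num_classes):
        super().__init__()
        self.vit = vit.eval()
        for p in self.vit.parameters():                      # backbone stays frozen
            p.requires_grad = False
        self.layer_idx = layer_idx
        self.head = nn.Linear(feat_dim, num_classes)         # the only trainable part

    def forward(self, series):
        img = series_to_image(series)
        with torch.no_grad():
            # Assumes a DINOv2-style get_intermediate_layers; adapt to
            # whatever intermediate-feature hook your backbone provides.
            tokens = self.vit.get_intermediate_layers(img, n=[self.layer_idx])[0]
        feats = tokens.mean(dim=1)                           # average tokens of that hidden layer
        return self.head(feats)

# Example wiring (downloads a checkpoint):
# vit = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")
# model = TiViTStyleClassifier(vit, layer_idx=6, feat_dim=768, num_classes=10)
```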
@SimonRoschmann
Simon Roschmann
2 months
How can we circumvent data scarcity in the time series domain? We propose to leverage pretrained ViTs (e.g., CLIP, DINOv2) for time series classification and outperform time series foundation models (TSFMs). 📄 Preprint: https://t.co/3at0eaVsEL 💻 Code: https://t.co/GbmqJUuQyI
5
21
62
@zeynepakata
Zeynep Akata
3 months
It is a great honor to receive the ZukunftsWissen Prize 2025 from the German National Academy of Sciences @Leopoldina with generous support from the @CoBaStiftung 🎉 This achievement wouldn’t have been possible without my wonderful group @ExplainableML @TU_Muenchen @HelmholtzMunich
@Leopoldina
Nationale Akademie der Wissenschaften Leopoldina
3 months
🎉 The "ZukunftsWissen" Prize 2025 of the Leopoldina and @CoBaStiftung goes to @zeynepakata @TU_Muenchen/@HelmholtzMunich. The computer scientist receives the award, endowed with €50,000, for her research on explainable AI (#KI). https://t.co/cnTA7ky7jx
6
6
138
@ExplainableML
Explainable Machine Learning
3 months
📢 Landed in Nashville🎺 for #CVPR2025! The EML group is presenting 4 exciting papers — come say hi at our poster sessions! More details in the thread — see you there! 🏁🌟
1
4
12
@ExplainableML
Explainable Machine Learning
4 months
#CVPR2025 is heading to the 'Music City' — Nashville! 🎺 Join us from June 11–15. We're thrilled to announce that we'll be presenting four papers at @CVPR! Check out the thread below for highlights, and feel free to stop by and chat with our authors! 📷👇
1
5
21
@ExplainableML
Explainable Machine Learning
5 months
🌏 #ICLR2025 is coming to the beautiful city of Singapore — April 24–28! We're excited to share 4 upcoming papers being presented at the conference. Check out the thread for highlights, and come chat with the authors if you're interested! 🧵👇
1
8
18
@SimonRoschmann
Simon Roschmann
2 years
I worked on this project as a student research assistant at @FraunhoferAISEC together with @shabaaash, Nicolas Müller, Philip Sperl and Konstantin Böttinger. Huge thanks to my collaborators and the institute!
1
0
2
@SimonRoschmann
Simon Roschmann
2 years
We leverage the disentanglement of features in the latent space of VAEs to reveal feature-target correlations in image and audio datasets and evaluate them for shortcuts.
1
0
1
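A hedged sketch of the idea in the tweet above: encode samples with a trained VAE, then rank latent dimensions by how strongly they correlate with the target, so suspiciously predictive dimensions can be inspected as potential shortcuts. The random latents and the planted "shortcut" below are synthetic placeholders, not the paper's procedure.

```python
# Toy illustration of scoring VAE latent dimensions by their correlation with
# the target. A real run would replace the random latents with posterior means
# produced by a trained VAE encoder on the dataset under inspection.
import numpy as np

def latent_target_correlations(latents, targets):
    """latents: (N, d) posterior means; targets: (N,) labels.
    Returns the Pearson correlation of each latent dimension with the target."""
    z = (latents - latents.mean(0)) / (latents.std(0) + 1e-8)
    y = (targets - targets.mean()) / (targets.std() + 1e-8)
    return z.T @ y / len(y)

rng = np.random.default_rng(0)
latents = rng.normal(size=(2000, 32))                        # stand-in for VAE posterior means
# Plant a synthetic shortcut: dimension 7 almost determines the label.
targets = (latents[:, 7] + 0.3 * rng.normal(size=2000) > 0).astype(float)

corr = latent_target_correlations(latents, targets)
suspects = np.argsort(-np.abs(corr))[:3]
print("most target-correlated latent dimensions:", suspects, np.round(corr[suspects], 2))
```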
@SimonRoschmann
Simon Roschmann
2 years
For real-world applications of machine learning, it is essential that models make predictions based on well-generalizing features rather than spurious correlations (shortcuts) in the data.
1
0
2
@SimonRoschmann
Simon Roschmann
2 years
Excited to share that our paper "Shortcut Detection with Variational Autoencoders" has been accepted at the #ICML Workshop on Spurious Correlations, Invariance and Stability. 📄 Paper: https://t.co/dKNbizfuab 🖥 Code:
1
1
3
@FraunhoferAISEC
Fraunhofer AISEC
4 years
The @BSI_Bund explains how #KI can be used to create deceptively realistic forgeries of faces, voices, and text, drawing on #Deepfake demonstrators from @FraunhoferAISEC. We teach humans and machines to recognize forgeries.
@BSI_Bund
BSI
4 years
#Deepfakes build on artificial intelligence and can be used to deliberately manipulate digital media. Detailed information is available on our new topic page "Deepfakes - Gefahren und Gegenmaßnahmen" (Deepfakes - dangers and countermeasures): https://t.co/Qsbac0Xs7U #DeutschlandDigitalSicherBSI
0
5
7
@FraunhoferAISEC
Fraunhofer AISEC
4 years
#DeepFake made by @FraunhoferAISEC: For research purposes, our colleagues used #KI to create a manipulated video that illustrates how far deepfakes have matured in forging media content. Watch the video here: https://t.co/sAHYRdLQ35
0
2
4