Konrad Rieck
@mlsec
Followers: 3K
Following: 5K
Media: 215
Statuses: 6K
Machine Learning and Security, Professor of Computer Science at TU Berlin, On Bluesky: @rieck.mlsec.org
Berlin, Germany
Joined December 2009
What a great honor and recognition to receive the @IEEESSP Test of Time Award for our work on code property graphs. Thank you all so much! https://t.co/SonYp3cSKW
@bifoldberlin @TUBerlin A quick throwback to our paper's journey ... 1/6
The lesson is clear: combining data from different sources and relying on AI creates a new attack surface. We need to fix this before AI weather forecasts become the norm. Paper: https://t.co/jU50lwwiZo Code: https://t.co/c1hKLBJnLm Distinguished Paper Award at CCS 4/4
github.com
This repository belongs to the publication: Adversarial Observations in Weather Forecasting - mlsec-group/adversarial-observations
Our attack injects tiny perturbations into the measurements that cause GenCast, currently the best AI weather model from Google, to predict false extreme events. The required changes are so small that they fall within the natural noise of observations and are hard to detect. 3/4
Some background: Current weather forecasts largely rely on observations from satellites. Around 100 of them orbit Earth, operated by different countries. We find that compromising just one is enough to fabricate extreme events anywhere on the planet. 2/4
AI predicts rain. We predict trouble! Today, Erik presents a novel attack on Google's latest AI weather model at @acm_ccs. By changing only 0.1% of the observations, the attack can fabricate or suppress the prediction of extreme events, from hurricanes to heat waves. 1/4
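For readers who want a concrete picture of the two constraints highlighted in this thread (touching only ~0.1% of the observations and staying within natural measurement noise), here is a minimal, illustrative Python sketch. It is not the attack from the paper or the linked repository: a toy linear model stands in for GenCast, and `extreme_score`, `sigma`, and the single gradient step are assumptions made purely for illustration.

```python
# Minimal sketch of a sparse, noise-bounded perturbation (illustration only,
# NOT the paper's implementation). A toy linear model stands in for GenCast.
import numpy as np

rng = np.random.default_rng(0)

n_obs = 10_000                # toy number of assimilated observations
obs = rng.normal(size=n_obs)  # toy observation vector
sigma = 0.1                   # assumed natural noise level per observation
w = rng.normal(size=n_obs)    # weights of the toy "forecast model"

def extreme_score(x):
    """Stand-in objective: higher value = more 'extreme' predicted event."""
    return float(w @ x)

# Gradient of the linear toy objective with respect to the observations.
grad = w

# Sparsity budget: perturb only ~0.1% of the observations,
# choosing the coordinates with the largest gradient magnitude.
k = max(1, int(0.001 * n_obs))
idx = np.argsort(np.abs(grad))[-k:]

# Signed step on the selected coordinates, capped at sigma so the change
# stays within the assumed natural noise and is hard to spot.
delta = np.zeros(n_obs)
delta[idx] = np.sign(grad[idx]) * sigma

print("clean score:    ", extreme_score(obs))
print("attacked score: ", extreme_score(obs + delta))
print("observations changed:", k, "of", n_obs)
```

Against a real forecasting model the gradient would come from automatic differentiation or a surrogate, and the step would typically be iterated; see the paper and the mlsec-group/adversarial-observations repository linked above for the actual method.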
LLM-based Vulnerability Discovery - https://t.co/dV5gjDA47m Our investigation leads to a disappointing outcome: despite the impressive capabilities of language models in other domains, their performance in vulnerability discovery is not significantly different from that of a …
4️⃣ PET-ARENA: How private is private enough? Probe privacy-preserving DB systems through real-world attacks and red-teaming missions. https://t.co/juuMFBDZQF 5/5
3️⃣ AgentCTF: Agents under attack! Red-team or defend autonomous systems in adversarial playgrounds. https://t.co/N2H1DwvTXd 4/5
2️⃣ Anti-BAD: Backdoored LLMs ahead! Defend against stealthy manipulations in post-trained models. https://t.co/SGlS0HUsgt 3/5
anti-bad.github.io
1️⃣ Space-AI Manipulation: Can you spot sabotage in orbit? Detect hidden triggers and tampered outputs in AI systems powering space operations. https://t.co/MrhL9Vsde6 2/5
We're excited to announce this year's competitions for @satml_conf. Get ready for four challenges tackling AI in space, backdoors in LLMs, CTF agents, and privacy-preserving databases. https://t.co/BTk2yKPbBG Let's dive in! 1/5
Reminder: SaTML is a fantastic venue for research in trustworthy ML, and its deadline is next week. If your nice paper was rejected from #NeurIPS2025, consider sending it to SaTML for a thoughtful review process instead of rolling the dice again.
Did AI folks not value your security insights or vice versa? Maybe you're submitting your papers to the wrong conference. @satml_conf has you covered! We are eager to read your work on the security, privacy, and fairness of AI. https://t.co/RFSbXORci6 Deadline: Sep 24
Got some hot research cooking? The @satml_conf paper deadline is just 9 days away. We are looking forward to your work on security, privacy, and fairness in machine learning. https://t.co/cPFitltvjA Deadline: Sep 24
Three weeks to go until the SaTML 2026 deadline! We look forward to your work on security, privacy, and fairness in AI. Deadline: Sept 24, 2025. We have also updated our Call for Papers with a statement on LLM usage. Check it out: https://t.co/RFSbXORci6
@satml_conf
Researchers in AI security, privacy & fairness: It's time to share your latest work! The SaTML 2026 submission site is live: https://t.co/q6eSJ4y26E Deadline: Sept 24, 2025 @satml_conf
Got a great idea for an AI + Security competition? @satml_conf is now accepting proposals for its Competition Track! Showcase your challenge and engage the community. https://t.co/3g3nvv3yqa Deadline: Aug 6
Some aspects of AI discourse seem to come from a different planet, oblivious to basic realities on Earth. AI for science is one such area. In this new essay, @sayashk and I argue that visions of accelerating science through AI should be considered unserious if they don't confront …
normaltech.ai
Confronting the production-progress paradox
This work emerged from a spontaneous collaboration with the group of @matthiasboehm7 at our institute @bifoldberlin and @CASA_EXC. If you'd like to learn more, check out our paper: https://t.co/I41Hr3hjn7 Code for crafting your own Chimera examples will follow soon. 4/4