Eoin Wickens
@enwckns
220 Followers · 2K Following · 5 Media · 279 Statuses
Security for AI @ HiddenLayer. All words are, well, they're just, like, my opinion, man.
Ireland
Joined June 2016
Thank you, Mihai 🙌 There is a lot we can do in the short term that will have many long term benefits.
All of this has happened before (vulnerabilities, lack of auth/autz, data leaks). All of this is happening again (ML security issues). I really recommend @enwckns's keynote at SCORED 24 about all the security issues found in ML and what we can do to not get to a bleak future.
Our latest research highlights that even well-intentioned solutions can have vulnerabilities. We found that the watermarking service used by AWS to combat misinformation in digital content generated by its Titan AI model had a vulnerability. Read more 👉 https://t.co/J9nrauPbSf
i was pretty bummed to miss @labscon_io actual this year on account of ❤️🩹🇦🇺 things #iykyk …but then this showed up out of the blue yesterday 🤩🥹🙏
✍️ #LABScon24 workshop The AI-talian Job: Hands-on attacks on AI Systems - by Travis Smith, Eoin Wickens (HiddenLayer) @MrTrav @enwckns @hiddenlayersec
https://t.co/5K6pwLKg8S
https://t.co/lx4dvoHPGf
Model storage under attack ( https://t.co/gFgDfQqqkE). Models are uninspectable, so the only solution to prevent tampering is to sign them. OpenSSF has a model signing SIG as part of the AI/ML WG. Both biweekly meetings are in the OpenSSF calendar. Also,
github.com
Supply chain security for ML. Contribute to sigstore/model-transparency development by creating an account on GitHub.
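The tweet's point about signing model artifacts can be sketched in stdlib-only Python. This is an illustrative stand-in, not the actual sigstore/model-transparency API: it hashes every file into a canonical manifest and protects the manifest with an HMAC key (where a real system would use an asymmetric Sigstore signature).

```python
import hashlib
import hmac
import json
import os

def manifest_digest(model_dir: str) -> bytes:
    """SHA-256 over a sorted manifest of (relative path, file hash) pairs."""
    entries = {}
    for root, _, files in os.walk(model_dir):
        for name in sorted(files):
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                entries[os.path.relpath(path, model_dir)] = hashlib.sha256(f.read()).hexdigest()
    # Canonical JSON so the digest is stable across filesystems.
    canonical = json.dumps(entries, sort_keys=True).encode()
    return hashlib.sha256(canonical).digest()

def sign(model_dir: str, key: bytes) -> str:
    """Produce a tamper-evident tag over the whole model directory."""
    return hmac.new(key, manifest_digest(model_dir), hashlib.sha256).hexdigest()

def verify(model_dir: str, key: bytes, signature: str) -> bool:
    """Any changed, added, or renamed file breaks verification."""
    return hmac.compare_digest(sign(model_dir, key), signature)
```

Since the weights themselves are opaque, this kind of whole-directory signature is the tamper check; it says nothing about what the model does, only that it is the model that was signed.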
The biggest company in the world and global leader in AI uses HiddenLayer for security. @Microsoft @hiddenlayersec
Our SAI team uncovered a #0day deserialization vulnerability in the popular statistical programming language R, widely used within #government and #MedicalResearch. This could be used as part of a #supplychainattack. Learn more 👇 https://t.co/sqYhZDdhE5
#Security4AI
hiddenlayer.com
HiddenLayer uncovered a zero-day deserialization vulnerability in the popular programming language R, widely used within government and medical research that could result in a supply chain attack.
Very nice work from @Abraxus7331 and @KieranEvans89 in discovering CVE-2024-27322, a vulnerability in R's deserialization library that can lead to "R-bitrary" code execution when deserializing untrusted data. https://t.co/6zmEpywfTD
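R's `readRDS`/`unserialize` sit in the same risk class as Python's `pickle`: the serialized stream can direct the loader to invoke callables. A minimal Python analogue (the class name is hypothetical; this is not the R exploit itself) shows why deserializing untrusted data amounts to code execution:

```python
import pickle

class Payload:
    # __reduce__ tells pickle how to "reconstruct" this object.
    # An attacker controls the return value, so reconstruction
    # can be any callable with any arguments.
    def __reduce__(self):
        return (print, ("code ran during deserialization",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # invokes print() before any object is returned
```

The fix is the same in both ecosystems: treat `readRDS`/`pickle.loads` input as code, and only ever deserialize data from sources you would be willing to execute.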
(a) this is fascinating (b) I hate to think how messed up science is going to get as people use LLMs for things they really shouldn’t, which evidently includes any kind of random sampling.
We're thrilled to have @mvjanus & @enwckns returning to #BSides SF this year. Make sure you catch their new presentation on 5/5, "Insane in the Supply Chain: Threat modeling for attacks on AI Systems." 🎬 https://t.co/3XzPZkDfJd Our full #RSAC schedule 👉
AI Village is back for DEF CON 32! We're looking for talks on all things ML + Security, but this year we're getting small! "Smart" devices, AVs, on-device facial recognition, and more! Show us how you broke them! Submission deadline is 12-May-2024!
Great talk by Marta Janus on supply chain attacks using machine learning models @CanSecWest @hiddenlayersec
🚀 Product Launch: Introducing HiddenLayer's AI Detection & Response for Generative AI. We're thrilled to bring this new capability to our award-winning platform, extending our end-to-end security to orgs deploying LLM-based applications 📄 https://t.co/nBttfOBZpc
#genai #LLM
🤖 Security researchers have uncovered a new #vulnerability in Hugging Face's Safetensors conversion service that could lead to supply chain attacks, compromising user-submitted models. Read details: https://t.co/93WZx7DRnP
#cybersecurity #hacking #technews
thehackernews.com
Hugging Face vulnerability allows attackers to hijack machine learning models.
In our latest publication, @enwckns & Kasimir Schulz show how an attacker could send malicious pull requests to any repository on Hugging Face by hijacking the Safetensors conversion bot — with a single malicious model, the conversion service can be compromised.
📅 SAVE THE DATE: HiddenLayer’s 2024 AI Threat Landscape Report will be released on March 6th. We're excited to have @enwckns, our Technical Research Director and one of the authors of our 2024 AI Threat Landscape Report, on the webinar. Pre-register 👉 https://t.co/7jSPYZRwRB
Our researchers discovered that the Hugging Face PyTorch to Safetensors conversion service could easily be compromised by attackers, who could tamper with models and leak the token used to create pull requests from the official bot. https://t.co/W9gc9bHEAE
hiddenlayer.com
In this blog, we show how an attacker could compromise the Hugging Face Safetensors conversion space and its associated service bot.
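Context for why the conversion service exists at all: loading a pickled PyTorch checkpoint can execute code, while the documented safetensors layout (an 8-byte little-endian header length, a JSON header, then raw tensor bytes) is pure data. A stdlib-only sketch, simplified and skipping the validation the real safetensors library performs, shows that parsing it never runs anything:

```python
import json
import struct

def build_safetensors(tensors: dict, dtype: str = "U8") -> bytes:
    """Pack raw byte buffers into a minimal safetensors-style blob."""
    header, data, offset = {}, b"", 0
    for name, buf in tensors.items():
        header[name] = {
            "dtype": dtype,
            "shape": [len(buf)],
            "data_offsets": [offset, offset + len(buf)],
        }
        data += buf
        offset += len(buf)
    hjson = json.dumps(header).encode()
    # 8-byte little-endian header length, then the JSON header, then data.
    return struct.pack("<Q", len(hjson)) + hjson + data

def read_header(blob: bytes) -> dict:
    """Parsing is pure data handling: no callables, unlike unpickling."""
    (hlen,) = struct.unpack("<Q", blob[:8])
    return json.loads(blob[8:8 + hlen])
```

The irony the research highlights is that the bot performing this pickle-to-safetensors conversion must itself load attacker-supplied pickles, which is exactly the dangerous step the format was designed to eliminate.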
📅 SAVE THE DATE: HiddenLayer’s 2024 AI Threat Landscape Report will be released on March 6th. Sign up to be the first to preview the report & join us in a webinar discussion as we share some of the report’s most important findings 👉 https://t.co/MlwpyrpJAF
#Security4AI
hiddenlayer.com
As we navigate an AI-driven era, we developed this report as a practical guide to understanding the Security for AI landscape and to provide actionable steps to implement security measures at your...
Great detection rules are about hitting a "sweet spot" that is somewhere before the point of diminishing returns, after which a rule can become "overfit" and functionally no better than a hash. #100daysofYARA
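The "overfit rule" idea can be illustrated outside YARA (illustrative Python with hypothetical samples): a rule keyed to one exact byte sequence behaves like a file hash and misses trivially modified variants, while a rule built on a feature the malware family shares generalizes across them.

```python
import hashlib

ORIGINAL = b"MZ\x90\x00...EvilDownloader v1.0...payload"
VARIANT = b"MZ\x90\x00...EvilDownloader v1.1...payload"

def overfit_rule(sample: bytes) -> bool:
    """Matches one sample exactly: functionally no better than a hash."""
    return hashlib.sha256(sample).digest() == hashlib.sha256(ORIGINAL).digest()

def general_rule(sample: bytes) -> bool:
    """Keys on a stable family marker, so minor version bumps still hit."""
    return b"EvilDownloader" in sample
```

The sweet spot the tweet describes sits between these extremes: specific enough to avoid false positives, general enough that a one-byte change doesn't evade it.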