Brian Vermeer
@BrianVerm
Followers
7K
Following
9K
Media
958
Statuses
6K
Java Champion | Staff Dev🥑 at @Snyksec | JUG leader @NLJug & @virtualJug | Java | Security | Dutch Air Reserve | Taekwondo Master | Views are my own
Breda, Nederland
Joined October 2015
Enjoying a great evening with splendid talks by @BrianVerm, @bjschrijver delicious food, and amazing company. Thank you @jetbrains for the giveaways. Big thanks to Axxes IT Consultancy Netherlands for hosting us. Till the next one! #utrecht #jug #java #community
0
3
6
Pro tip for everyone submitting to tech conferences. Include a video of a previous talk or create one. Delivery is just as important as the topic. Don't assume the program committee is familiar with you. You need to convince the PC that your submission is good enough
1
2
6
🛡️ Understanding Prompt Injection with @BrianVerm Learn how attackers manipulate LLMs - and how to guard your AI from the tricksters of prompt injection ⚔️ 👉 https://t.co/NHlu5rghf4
#Jfokus #DeveloperConference #AI #Security #PromptInjection #LLM #CyberSecurity #Java
0
2
2
If you are seeking an exceptional Product Manager in tech, this is your opportunity. Estelle effectively bridges product vision and engineering, making her a valuable asset to any product team.
Wrapping up a great product mission in healthtech, now looking for a newt Product Manager job where strategy, experimentation & user needs meet.🚀 Feel free to share my post or reach out if something comes to mind 💬 Ready for my next adventure 🏁
1
0
2
My latest talk at @Devoxx , Understanding Prompt Injection: Techniques, Challenges, and Advanced Escalation, explores how attackers use natural language to outsmart both humans and machines. https://t.co/MPSDRr7tUV
0
5
18
Back from @Devoxx. My talk “Understanding Prompt Injection” briefly had the top spot at, very briefly. Nevertheless very happy with turnout in the room and the final score. If you joined, thank you so much.
2
0
12
#VerboseMode at #Dev2Next 2025 with Brian Vermeer on securing your existing and legacy applications
1
2
2
📢 @BeJUG Meetup! 🎤 Talks Lessons from the Oregon Trail for the Secure Software Supply Chain – @KadiGrigg Breaching LLM Powered Applications – @BrianVerm 🍕 Food, 🍻 drinks & top-notch Java/security insights! 🗓 Sept 26 ⏰ 6 PM 🔗 https://t.co/EG57rOAuRV
0
1
5
Managing an AI agent memory may seem simple, but open to security risks... https://t.co/P8PybZe1k9 This is a great video walk through by @BrianVerm on how LLM chat/memory works in general but also how it can be abused 👇
0
1
2
An LLM’s memory can be an attack surface 🧠 If manipulated, it can make the model behave in unsafe ways. Learn why this matters 👉 https://t.co/aknkNzIZX8
#AI #AppSec #Memory
0
0
0
How chat memory manipulation can ruin your AI system https://t.co/umg4LuV7yq
snyk.io
Discover how chat memory manipulation can disrupt AI performance, lead to data drift, and compromise user trust. Learn key risks, real-world examples, and how to safeguard your AI systems.
0
0
1
Time for a next @BeJUG Event! Many thanks to @cegeka for hosting this event in Hasselt! In this Belgian Java User Group, we have @BrianVerm and @KadiGrigg who'll help us make (or break!) some software. Sign-up here 👉 https://t.co/t4ABZv1Lae
#java #security
meetup.com
Hope you all had a great summer so far, so let's end it with another event! This time we're heading to **Cegeka**, in Hasselt. Thanks for hosting us! This time we have Kad
0
3
5
Excited that I'll be speaking at the @AI4Devs event in Amsterdam on September 19th! I'll discuss security and privacy challenges in LLM-powered apps, including issues like prompt injection and data abuse, and offer solutions. Hope to see you there! https://t.co/JQ8fz8CZ53
0
0
1
On my way to @SunnyTech_MTP in Montpellier 🇫🇷 I am thrilled to deliver the closing keynote on how to securely build LLM-powered applications. If we include LLMs in our applications and build agents, what could possibly go wrong? Spoiler alert: a lot of work is left for us to do.
[KEYNOTE] Après 2 journées intensives, on fermera la conférence avec une keynote de Brian Vermeer sur les risques liés à l'accès aux données par les LLMs avec notamment la bonne vieille injection (de prompt), l'abus des données privées pour le training, etc. 🇬🇧 En anglais 🇬🇧
0
0
3
Prompt injection is quickly becoming one of the key security challenges in the world of AI and LLMs. We just published a new article that explores prompt injection, the different types of attacks, and why it’s such a complex issue to solve. https://t.co/IxFxCiRAiU
snyk.io
A prompt injection attack is a GenAI security threat where an attacker deliberately crafts and inputs deceptive text into a large language model (LLM) to manipulate its outputs.
0
0
1
Our talk on Securing LLM-powered applications from @DevoxxFR is live. In this talk, @LizeRaes and I will break LLMs and give you pointers on how to solve this. https://t.co/lAI6SBJkDl
0
0
2
1
4
19
LLMs are powerful, but safety is key! @BrianVerm explains the importance of LLM guardrails for reliable and secure #AI interactions. Read it now. 👇
snyk.io
Explore LLM guardrails, why they matter, and how you can effectively implement them to ensure safe and trustworthy AI interactions.
0
1
1