Lakera AI
@LakeraAI
Followers
2K
Following
404
Media
256
Statuses
546
Customers rely on Lakera for real-time security that doesn’t slow down their GenAI applications.
San Francisco, United States
Joined December 2020
🧠 Think you can break an AI? Gandalf: Agent Breaker is live. Real-world GenAI fails—phishing, tool abuse, more. 🧩 Outsmart the AI. Start 👉 https://t.co/iu8r5jIYlB
4
6
13
Indirect Prompt Injection hides inside the data AI systems ingest. A poisoned PDF or web page can activate instructions the moment the model reads it. We break down the lifecycle, real attacks, and the controls that reduce the risk. https://t.co/trGsyRkEsa
0
0
0
NEW GANDALF LEVELS JUST DROPPED LFG!! 🧙♂️🎉🍻
🧵🧙♂️ New Gandalf levels are out! I'm glad to introduce a new version of our prompt injection game -- Gandalf: Agent Breaker. You can hack 10+ AI agents and climb the leaderboard, and learn about real-world vulnerabilities!🪄 Try out the challenge at: https://t.co/cZepDHjvSB
3
7
68
🧙♂️ GANDALF x BASI 🐉 The beloved prompt injection game, Gandalf by @LakeraAI, has become a rite of passage for AI red teamers around the world and inspired many a jailbreaker to enter the fray of LLM spellcraft 🪄 They've been cooking up something brand new (I got a sneak peak
9
21
139
@LakeraAI released an agentic CTF version of Gandalf It’s got 10 different agent challenges including - prompt injection - memory tampering - tool abuse At OWASP we built FinBot, an agentic CTF, as part of the Agentic Security Initiative. Fun way to learn about AI Security!
1
1
1
"All untrusted third-party data is now executable malware.” @SamuelDWatts of @LakeraAI discusses the challenges of securing LLM deployments against vulnerabilities like prompt injections and jailbreaks, especially in an evolving threat landscape.
19
101
909
Hosting a security-themed demo night with @_ai_collective and @EarlybirdVC on the 23rd of July in London featuring @LakeraAI @HarryWetherald @AISecurityInst. Engineers from @cohere @Synthesia @windsurf_ai @instadeepai @Meta already have signed up 👀 https://t.co/BK2YWxBthx
luma.com
The AI Collective is back for its third demo night in London! This is the event where founders, builders, and dreamers bring their biggest ideas to life!…
1
4
21
🇨🇭 Switzerland leads the world in AI patents per capita—and precision matters. We're proud to be featured in the Swiss Deep Tech Report 2025 as a standout GenAI company 🧠 🗞️ Page 24: Lakera's building real-world secure AI. 🔗 Report: https://t.co/Pvwo2n9Hld
#AIsecurity
deeptechnation.ch
Discover insights in the Swiss Deep Tech Report 2025: growth in funding, top AI talent density, and high VC share globally for Switzerland.
0
0
5
🧩 Prompt-based evals ≠ real-world security. Attackers adapt. Static tests don’t. ⚠️ The Lakera Model Risk Index simulates live threats and scores models by how well they hold the line. See where your model stands: https://t.co/Eg9rEiG7iz
#GenAISecurity #LLMSecurity
0
1
6
#𝟱𝟭 on the list. 🛡️#𝟭 in securing AI apps. Lakera made it to Sifted’s B2B SaaS Rising 100 — spotlighting the top startups shaping the future of enterprise software. We’re the first GenAI security company on the list. Let’s go! 💥 #GenAI #LLMSecurity #AISecurity #Lakera
0
0
3
The Lakera AI Model Risk Index is here. The first runtime benchmark for LLM threats—measuring how models hold up under real-world attacks. Jailbreaks, RAG exploits, risk scores. Let’s raise the bar on GenAI security. 👉
lakera.ai
Discover how the Lakera AI Model Risk Index provides a real-world security benchmark for LLMs, offering quantified risk assessment across adversarial threats.
0
3
5
Thrilled to launch support for adding Guardrails on @LiteLLM UI This release brings support for adding Microsoft Presidio, AWS Bedrock Guardrails, @ProtectAICorp LLM Guard Endpoints, AIM Guardrails, @LakeraAI Guardrails on LiteLLM
1
2
4
Would you trust an AI agent to make critical decisions? 🤔 AI systems are becoming more autonomous—but with that comes new security risks. We break down the threats + solutions with Mateo Rojas-Carulla, Co-founder of @LakeraAI in our latest podcast ep https://t.co/gwAscG9OYr
0
1
2
#AI adoption is exploding—but so are the #cybersecurityrisks. In this episode, Mateo Rojas Carulla discusses how #vulnerabilities like #promptinjectionattacks are redefining #security. Tune in for actionable advice on securing $AIsystems in industries like healthcare and finance.
2
4
5
🎁 New Guide: Build AI Security Awareness with Gandalf! 🔒 Learn about AI vulnerabilities 🎮 Test red-teaming strategies 🛡️ Understand layered defenses 🎄 Download now and level up your AI security skills:
lakera.ai
Discover AI vulnerabilities and defenses with this hands-on guide. Explore real-world examples, red-teaming techniques, and practical tips to secure generative AI systems.
2
1
5
🚨 AI & Cybersecurity: What’s Changing? Lakera’s co-founder, Mateo Rojas-Carulla, joins Joe Colantonio to explore: 🔹 New threats like prompt injection attacks 🔹 How LLMs are reshaping security 🎧 Watch now: https://t.co/LGdSCUwoR6
#AI #Cybersecurity #Lakera
2
1
2
🎮 An AI agent with one rule—“Don’t transfer money”—was tricked. Participants paid to prompt it into releasing $50K. Each failed attempt grew the pot, until someone cracked it. A wild example of why AI security matters. 👉 Learn more:
freysa.ai
Enabling sovereign AI and self-owned cognition at global scale.
0
0
7
🚨 Building AI without security in mind? Risky move. Our AI Security for Product Teams Handbook helps you secure GenAI products from the start. 👉 Best practices 👉 Key risks & regulations 👉 Tools to protect your apps 📥 Download now: https://t.co/7iDNBJWVfk
#AIsecurity
lakera.ai
0
0
1
🚨 AI Security Webinar: Year in Review 🚨 🗓️ Dec 5, 9:00 AM PT Join experts from Lakera, Dropbox, Scale AI & more to: 👉 Unpack 2024’s top AI security challenges 👉 Explore real-world success stories 👉 Predict 2025 trends 📍 Register now: https://t.co/6NKkQPYDMz
#AIsecurity
lakera.ai
Explore 2024’s major AI security developments, insights from Lakera’s AI Security Readiness Report, and strategic predictions for 2025. Gain actionable insights to address emerging AI-specific...
0
2
3