PreambleAI Profile Banner
Preamble Profile
Preamble

@PreambleAI

Followers
714
Following
679
Media
50
Statuses
246

AI security and compliance solutions for generative AI systems. The team that discovered prompt injections in GPT-3.

Pittsburgh, PA
Joined January 2021
Don't wanna be here? Send us removal request.
@PreambleAI
Preamble
3 months
Excited to share - Prompt Injection 2.0: Hybrid AI Threats, on arXiv! 3 yrs post-discovery, prompt injections keep evolving. Also open-sourcing Prompt Injector, a toolkit to create and test prompt injections. Paper: https://t.co/jH8C6lTi6n Code: https://t.co/E6YlZjWpBp
0
2
11
@jer_mchugh
Jeremy McHugh, DSc.
7 days
Researchers from @TenableSecurity discovered what they called the “Gemini Trifecta” showing how Google's AI assistants can be exploited. Three now-patched flaws in Google Gemini: • Cloud Assist: log-to-prompt injection via attacker-controlled log fields. • Search
0
1
2
@jer_mchugh
Jeremy McHugh, DSc.
25 days
Probably one of the most impactful prompt injections I’ve seen so far.
@AISecHub
AISecHub
25 days
ShadowLeak: A Zero-Click, Service-Side Attack Exfiltrating Sensitive Data Using ChatGPT’s Deep Research Agent - https://t.co/NarKCxQUz8 - We found a zero-click flaw in ChatGPT's Deep Research agent when connected to Gmail and browsing: A single crafted email quietly makes the
0
1
4
@PreambleAI
Preamble
28 days
We hope they have a solid defense for prompt injections.
@jer_mchugh
Jeremy McHugh, DSc.
1 month
0
0
1
@PreambleAI
Preamble
1 month
We’ve been tracking this threat since responsibly disclosing it in 2022 to OpenAI & our patents reflect our continued work on sophisticated mitigation methods. It's a reminder that AI security is about always staying vigilant.
@jer_mchugh
Jeremy McHugh, DSc.
1 month
This interview on prompt injections is worth a listen. I completely agree with @KeithHoodletToB, if you are not logging and monitoring your AI systems, you're missing the attacks happening. If prompt injections do not have a permanent fix, then it will always be a threat to track
0
0
1
@jer_mchugh
Jeremy McHugh, DSc.
1 month
This research validates the fact that AI agents are great for malicious use. When defending against the misuse of AI (besides the USB cable injection), the ability for an agent to live off the land, blurs the line for malicious & normal AI behavior. I can see how this attack
@PalisadeAI
Palisade Research
1 month
🔌 We built an autonomous AI hacker that hides inside a USB cable. Plug a cable in, and the agent starts exploring the machine—browsing files, mapping connections, and quietly pulling data—while attackers watch live from a dashboard.
1
1
5
@jer_mchugh
Jeremy McHugh, DSc.
2 months
I spoke at a conference in May about this topic and how AI agents could be used to deploy ransomware. The talk focused on defending against ransomware and AI-powered threats. Following Black Hat this year, it appears that nearly every cybersecurity company is now leveraging AI
@TheHackersNews
The Hacker News
2 months
🚨 AI-powered ransomware is here. Researchers just uncovered PromptLock—ransomware strain that uses OpenAI’s new gpt-oss:20b model to write unique attack scripts on every run. ◉ Cross-platform: Windows, Linux, macOS. ◉ Harder to spot. Harder to stop. ◉ For now, it’s “just” a
0
2
5
@jer_mchugh
Jeremy McHugh, DSc.
2 months
These new AI browsers seem like an easy target for indirect prompt injections. Whenever I get access to Perplexity Comet, I’ll have to experiment.
@zack_overflow
zack (in SF)
2 months
Why is no one talking about this? This is why I don't use an AI browser You can literally get prompt injected and your bank account drained by doomscrolling on reddit:
0
1
3
@jer_mchugh
Jeremy McHugh, DSc.
2 months
Prompt injections continue to plague AI systems. These attacks are not even sophisticated yet
@brave
Brave
2 months
One example attack: 1. A Comet user sees a Reddit thread where one comment has hidden instructions. 2. The user asks Comet to summarize the thread. 3. Comet follows the malicious instructions to find the user's Perplexity login details and send them to the attacker.
0
1
3
@jer_mchugh
Jeremy McHugh, DSc.
2 months
🚩🚩 Seems like a nice app, but I would highly discourage even installing the app until major changes are made to the system access and data privacy. Qoder's ToS/Privacy Policy red flags: - Perpetual, irrevocable rights to ALL your code (ToS 5.1) - Can access/modify ANY system
@testingcatalog
TestingCatalog News 🗞
2 months
BREAKING 🚨: Alibaba released Qoder - a new AI Code Editor which is now available for Free during the preview. phase. A future Pro price is set as TBD. Testing time 👀
2
5
18
@jer_mchugh
Jeremy McHugh, DSc.
2 months
I wouldn’t be surprised prompt injections could exploit other AI developer tools.
@wunderwuzzi23
Johann Rehberger
2 months
👉 Episode 21: Hijacking Windsurf How Prompt Injection Leaks Developer Secrets The agent cannot protect your private code or secrets and can send it to third-party servers when under attack from untrusted data - there are multiple exploit chains...
1
1
5
@jer_mchugh
Jeremy McHugh, DSc.
2 months
It’s great to see the frontier labs involving the broader community for red teaming. It’s also a great time to try out our open source prompt injector tool to join the challenge. Just download the model from Ollama and get started.
@OpenAI
OpenAI
2 months
We’re launching a $500K Red Teaming Challenge to strengthen open source safety. Researchers, developers, and enthusiasts worldwide are invited to help uncover novel risks—judged by experts from OpenAI and other leading labs. https://t.co/EQfmJ39NZD
0
1
2
@PreambleAI
Preamble
3 months
AI agents = more vulnerable attack surface. Planning to deploy AI agents across operations? You need robust security + AI threat intelligence. Many see AI agents as a perfect solution, but AI attacks can erase savings and cost more in damages. How are you securing your AI?
0
0
2
@PreambleAI
Preamble
3 months
Prompt injections objectively pose a massive threat to AI agents! A new study by @GraySwanAI & @AISecurityInst reviewed 1.8M attacks on 22 frontier models in 44 real-world scenarios. Result: 100% policy violation rate with prompt injections
Tweet card summary image
arxiv.org
Recent advances have enabled LLM-powered AI agents to autonomously execute complex tasks by combining language model reasoning with tools, memory, and web access. But can these systems be trusted...
1
1
5
@PreambleAI
Preamble
3 months
Last week, Preamble's Prompt Injection 2.0 paper & open-source toolkit showed LLM vulnerabilities turning into agentic attacks as dev outpaces security. In a chat w/ Fed Vice Chair Bowman, @OpenAI CEO @sama shares concerns about prompt injections. Watch:
0
1
4
@PreambleAI
Preamble
3 months
The question isn't whether AI security will become foundational, it's when and how. Organizations that start building AI security capabilities now will have a significant advantage over those who wait. What's your organization's AI security maturity level? 5/5
0
0
1
@PreambleAI
Preamble
3 months
Key difference: AI systems are more complex than traditional software. They're non-deterministic, context-sensitive, and reasoning-based. This creates new challenges that existing security approaches can't fully address. 4/5
1
0
1
@PreambleAI
Preamble
3 months
The cybersecurity industry evolved through: • Standardized testing frameworks • Security-first development practices • Regulatory requirements • Professional certifications • Specialized tools and platforms 3/5 AI security is already following a similar path.
1
0
1
@PreambleAI
Preamble
3 months
In the early 2000s, many organizations: • Deployed software without security testing • Relied on perimeter defenses alone • Treated security as a cost center • Had few security specialists Sound familiar to current AI deployments? 2/5
1
0
1
@PreambleAI
Preamble
3 months
AI security is like cybersecurity in the early 2000s. Back then, security was often an afterthought. Today, it's foundational to everything we build. Are we at a similar inflection point with AI security? 🧵1/5
1
0
1