AISecHub
@AISecHub
Followers
3K
Following
1K
Media
653
Statuses
1K
🚀 AISecHub | AI & Cybersecurity | Securing AI systems, and sharing insights on emerging challenges 💡sponsored by https://t.co/tdadZWEpuf
Singapore
Joined December 2024
Adversarial Attacks and Defenses: Deep Reinforcement Learning 1. We propose a classification framework for adversarial attack methods against DRL models based on perturbation types and attack targets. 2. Leveraging this framework, we comprehensively review existing
0
1
11
Attack Strategies for LLM Web Agent Red-Teaming In this work, we presented Genesis, a web agent red-teaming framework that systematically discovers, summarizes, and evolves reusable attack strategies. By introducing the genetic algorithm with a hybrid strategy representation
0
3
14
OpenAI Atlas Omnibox Prompt Injection: URLs That Chat - https://t.co/IKx2YMxE1I - @NeuralTrustAI In OpenAI Atlas, the omnibox (combined address/search bar) interprets input either as a URL to navigate to, or as a natural-language command to the agent. We’ve identified a prompt
neuraltrust.ai
NeuralTrust research shows how using crafted strings that resemble URLs, an attacker can override user intent and jailbreak agentic browsers like OpenAI Atlas.
0
0
3
Top 6 MCP Vulnerabilities (and How to Fix Them) - https://t.co/yZXw7EOPTB
@omercnet at @descopeinc
#AIsecurity #MCP #AgentSecurity #PromptInjection #SessionHijacking #ToolingSecurity #Mitigation #ProtocolSecurity #LLM #Hardening #descopeinc #descope
descope.com
Hasty MCP implementation without guardrails can cause security risks. Learn about six common MCP vulnerabilities and actionable tips to protect against them.
0
1
8
⚠️ Heads up!!! Big warning for HP AI Devices! ⚠️ Some of HP’s latest Next Gen AI PCs, including the EliteBook X Flip G1i, are getting the updated OneAgent 1.2.50.9581 build. That version seems to run a cleanup script removing any certificate containing “1E” in its subject ....
26
76
299
How a fake AI recruiter delivers five staged malware disguised as a dream job #BeaverTail #Lazarus
https://t.co/FE8STf5m6H
BeaverTail, OtterCookie and InvisibleFerret functional overlaps #Lazarus P1: Cryptocurrency modules targeted by OtterCookie. P2: Targeted BeaverTail cryptocurrency browser extensions. https://t.co/jCGw2v5wNo
0
16
64
For Cyber Security Awareness Month at IBM, I delivered an internal cross-org presentation on benchmarking SOTA LLM models using some CTF problems I created this year. I also shared some of my learnings from working extensively on agents 🖤⚔️ Slides => https://t.co/7Y2iYXsrdG
5
18
58
CVE-2025-6515: Prompt Hijacking in MCP ecosystems - https://t.co/b75l2qyi8J by @JFrog, @JFrogSecurity JFrog Security Research recently discovered and disclosed multiple CVEs in oatpp-mcp – the Oat++ framework’s implementation of Anthropic’s Model Context Protocol (MCP) standard.
jfrog.com
Enhance application security by preventing “Prompt Hijacking” attacks based on CVE-2025-6515 and their ability to introduce malicious code into development environments.
0
0
6
The New Reality of Threat Modeling in AI-Generated Systems - https://t.co/rqMhmY9agA
#ThreatModeling #AIGeneratedCode #SupplyChain #DataPoisoning #OWASPTop10 #MITREATLAS #PromptInjection #ModelIntegrity #SBOM
securityreview.ai
Your developers are shipping AI-generated code faster than ever, thanks to tools like GitHub Copilot and ChatGPT. But your threat modeling process is still stuck in workshop mode: slow, manual, and...
0
0
7
Factory Recap: Securing AI Agents in Production - https://t.co/khr4XGiQv7 - @GoogleCloudTech
#AISecurity #AIAgents #PromptInjection #ContextPoisoning #VectorDBAttack #Sandbox #ModelArmor #RAGLeaks #SupplyChain #EUAIAct
cloud.google.com
Securing AI agents in production is crucial. Learn about current threats, layered defense strategies, and practical implementations to keep your AI agents and users safe from prompt injection,...
0
1
5
Blog Series: Securing the Future: Protecting AI Workloads in the Enterprise - https://t.co/vzqerjzjFw - @Azure
#AI #AISecurity #AISupplyChain #DataPoisoning #ModelBackdoors #MLOps #NIST #MITRE #CloudSecurity #EnterpriseAI
techcommunity.microsoft.com
Post 1: The Hidden Threats in the AI Supply Chain Your AI Supply Chain Is Under Attack — And You Might Not Even Know It Imagine deploying a cutting-edge AI...
1
2
10
The Highs and Lows of Vibe Coding - https://t.co/8NVAPlXlUh - @snyksec The vibe coding revolution has created billion-dollar companies in months and democratized software creation for millions, while simultaneously introducing catastrophic security vulnerabilities and
snyk.io
"Vibe coding" with AI builds billion-dollar startups fast, but it also creates massive security risks. With 40% of AI code vulnerable and major data leaks emerging, explore the highs and lows of this...
0
0
4
Hugging Face and VirusTotal collaborate to strengthen AI security - https://t.co/ICxMhv0CkQ - @huggingface @bquintero @XciD_ Starting today, every one of the 2.2M+ public model and datasets repositories on the Hugging Face Hub is being continuously scanned with VirusTotal. Why
huggingface.co
0
0
7
Interpreting Jailbreaks and Prompt Injections with Attribution Graphs - https://t.co/31fulJ9D4C by @zenitysec Today’s agent security is strong at the edges: we monitor inputs/outputs, trace and permission tool calls, track taint, rate-limit, and log everything. We have a very
labs.zenity.io
0
1
8
Leveraging Machine Learning to Enhance Acoustic Eavesdropping Attacks (Blog Series) - https://t.co/fY6ESiEqBq In this post, we discussed the various steps of data preprocessing we took to prepare the accelerometer and gyroscope readings for AI training. We also discussed the
0
0
5
1/ NEW: We propose a new black-box attack on LLMs that needs only text (no logits, no extra models). It's generic: we can craft adversarial examples, prompt injections, and jailbreaks using the model itself👇 How? Just ask the model for optimization advice! 🎯
2
11
56
Metanarrative Prompt Injection - https://t.co/Of8awtC6O3 by @rez0__ When exploiting AI applications, I find myself using this technique really often so I figured I’d write a quick blog about it. I call it the “Metanarrative Prompt Injection.” You might have already used this
josephthacker.com
Metanarrative prompt injections in AI security and its implications.
1
1
8