NeuralTrust
@NeuralTrustAI
Followers
405
Following
822
Media
75
Statuses
491
Our platform secures AI Agents and LLMs for the largest companies🛡️⚖️
Joined October 2024
For the first time, AI agents can protect other agents. Introducing Guardian Agents by NeuralTrust: https://t.co/7WUH6m68w4
43
40
104
NeuralTrust is now an official partner of @OWASPGenAISec We’ve worked with @owasp on new attack research and industry guidance, and we’re proud to support the community shaping the future of AI security. Tomorrow we’ll be at the #OWASP Agentic AI Security Summit Europe for the
3
0
3
One week from now, we’ll be at @BlackHatEvents Europe showcasing the latest in AI Agent and LLM security. We’re heading to ExCeL London on 10–11 December (𝗦𝘁𝗮𝗻𝗱 𝟰𝟮𝟳) with live demos, new research, and a few things we’ve been saving specifically for this event. If
7
1
15
NeuralTrust selected as Top 20 Startups for the 4YFN Awards 2026 at Mobile World Capital! @4YFN_MWC @MWCapital
The digital disruptors are here! 🚀 AIM Intelligence, DeepKeep, Enhans & @NeuralTrustAI make the #4YFNAwards shortlist for Digital Horizons. Leading digital transformation across industries. Explore the #4YFN26 Awards here 👉 https://t.co/oqK9ItNXkX
2
0
8
AI Agents Are The New Spreadsheets: Ubiquitous, Powerful And Nearly Impossible To Govern https://t.co/Y0Be7ocZAC Written by @joanvendrellf of @neuraltrustai
0
2
6
Chema Alonso dives into our recently discovered jailbreak for OpenAI Atlas Omnibox
El lado del mal - Prompt Injection en ChatGPT Atlas con Malformed URLs en la Omnibox https://t.co/5AWA2oAyig
#ChatGPT #ATLAS #AI #IA #AgenticAI #InteligenciaArtificial #Bug #Exploit #PromptInjection #Hacking
1
4
14
Honored to see @chemaalonso analyze our OpenAI Atlas Omnibox prompt injection. URL-like text pasted into the omnibox can be interpreted as a command, turning a “link” into a prompt-injection vector. Read it here: https://t.co/uvGzsq41rd
#AISecurity #PromptInjection
0
0
8
The address bar of @OpenAI’s ChatGPT Atlas browser could be targeted for prompt injection using malicious instructions disguised as links, @NeuralTrustAI reported. #cybersecurity #AI #infosec #CISO
scworld.com
A prompt disguised as a URL could be copied and pasted by an unsuspecting user.
0
2
6
We jailbroke OpenAI Atlas with a URL prompt injection By pasting a crafted “link” into the omnibox, Atlas treated it as a high-trust command instead of navigation, letting the agent perform unsafe actions https://t.co/kTWWWSGQEw
neuraltrust.ai
NeuralTrust research shows how using crafted strings that resemble URLs, an attacker can override user intent and jailbreak agentic browsers like OpenAI Atlas.
1
2
13
#PortfolioBStartup | Protagonistas del primer episodio de la serie ‘Más allá del pitch: un viaje de la idea al éxito’ de La Vanguardia, @NeuralTrustAI, participada @BStartup @BancoSabadell, reflexiona sobre el reto de emprender. https://t.co/kYjS9Y4939
Thank you @LaVanguardia and @BancoSabadell @BStartup for an amazing interview: https://t.co/mAFcVz1bPT
0
1
4
NeuralTrust, based in Barcelona, demonstrated the ease of manipulating chatbots. Award-winning in our Startup Competition, it offers real-time AI risk, compliance & trust tech solutions—already working with banks, insurers & governments. 🚀 https://t.co/hc012y4gZw
lavanguardia.com
La empresa detecta vulnerabilidades, bloquea ataques, monitoriza el rendimiento y garantiza el cumplimiento normativo
0
2
3
OpenAI's GPT-5 jailbroken in 24 hours! 🚨 Researchers used a new "Echo Chamber" technique to bypass safety filters. This raises questions about AI security. ➡️ https://t.co/urGuhxa9IN
#AISecurity, #LLM, #Cybersecurity, #GPT5
0
1
1
🔎 GPT-5 jailbroken via Echo Chamber + Storytelling NeuralTrust researchers bypassed GPT-5’s safety guardrails using a combo of Echo Chamber context poisoning and narrative-driven steering. Sequential, benign-seeming prompts built a “persuasion loop,” fooling the model into
0
4
10
🚨💻 Within 24 hours of GPT-5’s launch, security researchers NeuralTrust & SPLX jailbroke the model, exposing serious safety flaws. NeuralTrust’s “Echo Chamber” attack used subtle narrative context poisoning to bypass guardrails, while SPLX’s “StringJoin Obfuscation” trick
2
5
11
GPT-5 Jailbreak with Echo Chamber and Storytelling - https://t.co/95N9ALgAxG by Martí Jordà @ @NeuralTrustAI By combining our Echo Chamber context-poisoning method with a narrative-steering Storytelling layer, we guided the model—without any overtly malicious prompts—to
neuraltrust.ai
Using the Echo Chamber and Crescendo Attack techniques, our research team has uncovered a critical vulnerability in the newly released model by OpenAI.
0
3
9
The business benefits of artificial intelligence are now part of many digital strategies. But when it comes to securing AI systems, organizations are still playing catch-up.
mitsloan.mit.edu
New guidance includes 10 questions that can help organizations build secure-by-design artificial intelligence.
1
9
22
AI enhances efficiency—but it can also introduce new security risks. Explore top AI threats and learn how a cloud-native application protection platform can safeguard your AI and cloud workloads: https://t.co/XQ8ElgZw1O
0
8
19
Researchers discover critical vulnerability in LLM-as-a-judge reward models that could compromise the integrity and reliability of your AI training pipelines.
bdtechtalks.com
Researchers discover critical vulnerability in LLM-as-a-judge reward models that could compromise the integrity and reliability of your AI training pipelines.
0
1
2
AI is a game changer—but only if you secure it. This guide outlines AI risks and actionable cybersecurity insights. Download it now and explore our redesigned Security Insider page for more: https://t.co/7d3qw5EDTa
#AI #SecurityInsider
0
16
38