
chatguard
@LLMChatguard
Followers
21
Following
10
Media
6
Statuses
48
https://t.co/z7F1hYMqhV | Defend Against Prompt Injection Attacks
Joined September 2023
Waiting to be built. ?. It's already here:.
chatguard.ai
Automate Attack Simulation to Create Actionable Datasets to Secure LLMs; LLM Security: Large Language Model Security; AI Security; Data Security; Attack Simulation; Jailbreaking; Red-Teaming;...
If prompt injection is fundamentally insolvable, as I suspect it is, then there is a sizeable security company waiting to be built just around mitigating this issue.
0
2
2
This research paper highlights the importance of understanding Prompt Injection Risks.
Assessing Prompt Injection Risks in 200+ Custom GPTs . "Through prompt injection, an adversary can not only extract the customized system prompts but also access the uploaded files.". #privacy #security #GenerativeAI #ArtificialIntelligence.
0
0
0
RT @bobehayes: Assessing Prompt Injection Risks in 200+ Custom GPTs . "Through prompt injection, an adversary can not only extract the custβ¦.
ar5iv.labs.arxiv.org
In the rapidly evolving landscape of artificial intelligence, ChatGPT has been widely used in various applications. The new feature β customization of ChatGPT models by users to cater to specific...
0
1
0
This is why we built ChatGuard π¦. Find out more below, or send us a DM for more insights on the world of AI security π.
chatguard.ai
Automate Attack Simulation to Create Actionable Datasets to Secure LLMs; LLM Security: Large Language Model Security; AI Security; Data Security; Attack Simulation; Jailbreaking; Red-Teaming;...
0
0
0
Our co-founder @xingxinyu and his team of researchers at Northwestern University recently assessed the security of over 200 custom GPT models. The results were damning:. They jailbroke 97.2% of them.
1
0
0
The current AI debate is too narrow. "Are AI models becoming too powerful?". "Are they biased? How can we be sure?". "Are they abusing data privacy?". A much more fundamental question is missing. πΌπ§π πΌπ π’π€πππ‘π¨ ππ«ππ£ π¨πππͺπ§π ππ£ π©ππ πππ§π¨π© π₯π‘πππ?.
1
0
0
1/ Large Language Models are continuing to revolutionize all sectors of business and life with their AI power, but new research from Saarland University and CISPA flags some major security risks when integrating #LLM's into various Applications.
1
0
1
RT @xingxinyu: Happy to see a comprehensive re-evaluation to ML-based fuzz, particularly when there is a broad discussion on using AI for vβ¦.
0
2
0
RT @LLMChatguard: Waiting to be built. ?. It's already here:.
chatguard.ai
Automate Attack Simulation to Create Actionable Datasets to Secure LLMs; LLM Security: Large Language Model Security; AI Security; Data Security; Attack Simulation; Jailbreaking; Red-Teaming;...
0
2
0
4/ This ongoing threat can be seen from top creators having their source code and prompts repeatedly being exported.
wtf. Reverse engineering Grimoireβs prompt to learn how it works and make your version is one thing. But.-republishing my code w/o consent.-copy pasting it directly.-to market your tool.-to steal my product.-in order to avoid paying OpenAI (& me with revenue share). Is fucking
1
0
0
3/ Chief Scientist of ChatGuard @xingxinyu led a team testing over 200 custom ChatGPT models, concluded that most had their system prompts and training files exposed! This is a massive security concern.
1
0
0