
LLM Security
@llm_sec
Followers 10K · Following 652 · Media 253 · Statuses 830
Research, papers, jobs, and news on large language model security. Got something relevant? DM / tag @llm_sec
🏔️
Joined April 2023
RT @hannahrosekirk: Listen up all talented early-stage researchers! 👂🤖 We're hiring for a 6-month residency in my team at @AISecurityInst…
RT @LeonDerczynski: At ACL in Vienna? Hear the world's leading prompt injector talk at LLMSEC on Friday! Johann Rehberger @wunderwuzzi23…
RT @LeonDerczynski: Come to LLMSEC at ACL & hear Niloofar's keynote: "What does it mean for agentic AI to preserve privacy?" - @niloofar_mi…
RT @LeonDerczynski: First keynote at LLMSEC 2025, ACL: "A Bunch of Garbage and Hoping: LLMs, Agentic Security, and Where We Go From Here"…
RT @LeonDerczynski: Call for papers: LLMSEC 2025. Deadline 15 April, held w/ ACL 2025 in Vienna. Formats: long/short/war stories. More: >>…
sig.llmsecurity.net
The first ACL Workshop on LLM and NLP Security; Summer 2025, Vienna, Austria
RT @garak_llm: garak has moved to NVIDIA! New repo link:
github.com
NVIDIA/garak: the LLM vulnerability scanner
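A minimal sketch of trying the scanner after the move, assuming garak is installed (pip install garak) and that the --model_type / --model_name / --probes flags match your installed version; check python -m garak --help to confirm:

import subprocess
import sys

# Smoke-test scan: run garak against a small open model with one probe family.
# The flag names below are assumptions about the garak CLI, not official docs.
cmd = [
    sys.executable, "-m", "garak",
    "--model_type", "huggingface",   # generator backend used to load the model
    "--model_name", "gpt2",          # small model, quick to scan
    "--probes", "encoding",          # one probe family; omit to run the default set
]
subprocess.run(cmd, check=True)

Results land in garak's report output, which you can review for any probes the model failed.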
author thread for cognitive overload attack:
1. 🔍 What do humans and LLMs have in common? They both struggle with cognitive overload! 🤯 In our latest study, we dive deep into In-Context Learning (ICL) and uncover surprising parallels between human cognition and LLM behavior. @aminkarbasi @vbehzadan
2. 🧠 Cognitive Load…
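The general pattern behind this kind of overload attack is to pad the in-context prompt with progressively more irrelevant tasks and check whether the model's answer to the real question degrades. The sketch below only illustrates that pattern; it is not the authors' experimental setup, and query_model is a hypothetical placeholder for whatever LLM client you use.

def build_overloaded_prompt(question: str, n_distractors: int) -> str:
    # Prepend n_distractors irrelevant in-context tasks before the real question.
    distractors = [
        f"Task {i}: reverse the string 'abcdefg' and report the result."
        for i in range(n_distractors)
    ]
    return "\n".join(distractors + [f"Final task (answer only this one): {question}"])

def query_model(prompt: str) -> str:
    # Hypothetical placeholder: swap in your actual LLM API call here.
    raise NotImplementedError

if __name__ == "__main__":
    question = "What is 17 + 25?"
    for load in (0, 5, 20, 50):
        prompt = build_overloaded_prompt(question, load)
        print(f"--- {load} distractor tasks, {len(prompt)} prompt chars ---")
        # print(query_model(prompt))  # uncomment once query_model is wired up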
RT @NannaInie: unpopular opinion: maybe let insecure be insecure and worry about the downstream effects on end users instead of protecting…
RT @_Sizhe_Chen_: Safety comes first when deploying LLMs in applications like agents. To unlock richer opportunities for LLMs, we mitigate prompt in…