Sentinel
@Sentinel_Seed
170 Followers · 20 Following · 4 Media · 33 Statuses
Safety for AI that Acts. Open-source alignment for LLMs, robots & agents. THS Protocol. Text is risk, action is danger.
Joined November 2025
All data, scripts, prompts... everything is published in the review repository. We are processing the raw data to add more reviews in an organized way. https://t.co/5IhCECQeER
github.com · Public evaluation data and reproducibility tools for Sentinel alignment seeds - sentinel-seed/sentinel-evaluations
0 · 0 · 2
We are populating the site with processed data: 10 assessments published so far, drawn from more than 5,000 test runs. https://t.co/yybuYeOnCs
1 · 0 · 4
We are building Sentinel Lab, an open space for transparent AI safety assessments. All benchmark results, methodologies, and raw data will be public and organized. You will be able to submit your tests, analyze results, closely examine them, and rebuild the tests yourselves. We…
sentinelseed.dev · Protect LLMs, Agents, and Robots. One framework for text, action, and physical safety. Open source.
3 · 0 · 6
Sentinel Seed v2.0 released. The security layer for AI across multiple interaction surfaces is now more robust. Check the full docs: https://t.co/BQhFCACIOy
github.com · sentinel-seed/sentinel
14 · 5 · 28
We are refining the environment so the public (and later, clients) can interact with the solutions. We are preparing new versions, including options for API-based use. Every developer, every entrepreneur, every user will be able to choose the solution that best fits their usage…
5 · 2 · 18
Check out our macro plan. Our destination is the “essence”, but along the way we will deliver value to everyone who needs it. Everyday people, companies, developers: different users demand different solutions. https://t.co/QtLycwLZya
sentinelseed.dev · Protect LLMs, Agents, and Robots. One framework for text, action, and physical safety. Open source.
6 · 2 · 18
AI security is not a layer; it's a system. Just as an antivirus (a system) protects everyday users, companies, and developers from external hacker attacks, we will offer protections against unwanted behaviors for anyone who wants to implement AI in their daily life, in their…
10 · 2 · 18
Friends, just one more disclaimer: our intention was never to create any kind of friction or A-vs-B competition; far from it. We come from a culture that is much more collaborative than the one we found here, with far less jealousy between projects. For our…
10 · 3 · 24
Updated information on Dexscreener. https://t.co/XBCpgOdBUy
dexscreener.com · $0.000006164 SENTINEL (SENTINEL) realtime price charts, trading history and info - SENTINEL / SOL on Solana / PumpSwap
15 · 3 · 29
A seed that works only during training doesn’t solve the problem, because most people rely on models trained by others. That would require each person to train their own version of an AI model. But a seed that also works in the post-training phase becomes far more valuable,…
5 · 3 · 18
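A minimal sketch of what "works post-training" means in practice: prepend the seed text to the system prompt of any off-the-shelf model at inference time, with no retraining. The file name, model choice, and chat API below are assumptions for illustration, not the project's actual integration.

```python
# Minimal sketch: applying an alignment seed at inference time.
# File path and model are hypothetical; see the Sentinel repo for real seeds.
from openai import OpenAI

client = OpenAI()

with open("sentinel_seed.txt", encoding="utf-8") as f:
    seed = f.read()

def safe_chat(user_message: str) -> str:
    """Route every request through the seed-augmented system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": seed},  # seed governs behavior
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```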
Guys, just a quick disclaimer. Our work has shown that there are ways to improve AI safety, not only for LLMs, but also for the agents that will likely become the main vectors of interaction with humanity in the future. And the best way to validate the strength of the fundamental…
2 · 3 · 16
We don't use text benchmarks to measure robot safety.
HarmBench → harmful text
SafeAgentBench → dangerous physical tasks
BadRobot → malicious robotic queries
Different domains need different tests.
10 · 5 · 18
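A sketch of that separation as code. The benchmark names come from the tweet above; the domain keys and routing function are illustrative assumptions, not part of the Sentinel repo.

```python
# Illustrative routing: each safety domain gets its own benchmark suite.
BENCHMARKS = {
    "text": "HarmBench",                # harmful text
    "physical_task": "SafeAgentBench",  # dangerous physical tasks
    "robot_query": "BadRobot",          # malicious robotic queries
}

def suite_for(domain: str) -> str:
    """Return the benchmark suited to a given safety domain."""
    try:
        return BENCHMARKS[domain]
    except KeyError:
        raise ValueError(f"no benchmark defined for domain {domain!r}") from None

# A text-safety score says nothing about physical safety:
assert suite_for("robot_query") == "BadRobot"
```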
Three domains. Three risk levels.
1. Text (LLMs): misinformation, harm enabling
2. Digital actions (Agents): data deletion, unauthorized access
3. Physical actions (Robots): injury, property damage, death
Sentinel validates all three.
3 · 3 · 12
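The same taxonomy expressed as code, with risk ordered by domain. This is an illustrative sketch; the enum and its labels are assumptions, not Sentinel's API.

```python
# Risk escalates from text to digital action to physical action.
from enum import IntEnum

class Domain(IntEnum):
    TEXT = 1      # LLM output: misinformation, harm enabling
    DIGITAL = 2   # agent actions: data deletion, unauthorized access
    PHYSICAL = 3  # robot actions: injury, property damage, death

def stricter_than(a: Domain, b: Domain) -> bool:
    """Higher-risk domains warrant stricter validation before execution."""
    return a > b

assert stricter_than(Domain.PHYSICAL, Domain.TEXT)
```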
A robot controlled by an AI needs:
1. THS Protocol (Truth-Harm-Scope)
2. Anti-self-preservation
3. Physical action awareness
The full seed, not the minimal one. Lives depend on it.
1 · 4 · 12
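One way to picture those three requirements is as a gate in front of a robot's actuators. The Action type and the flag-based predicates below are hypothetical stand-ins for illustration; this is not Sentinel's implementation.

```python
# Hypothetical pre-execution gate for AI-driven robot actions.
# Every predicate here is a placeholder, not Sentinel's implementation.
from dataclasses import dataclass, field

@dataclass
class Action:
    description: str
    is_physical: bool
    flags: set[str] = field(default_factory=set)  # e.g. {"deceptive", "harmful"}

def ths_gate(action: Action) -> bool:
    """Truth-Harm-Scope: all three checks must pass before acting."""
    truthful = "deceptive" not in action.flags      # Truth
    harmless = "harmful" not in action.flags        # Harm
    in_scope = "out_of_scope" not in action.flags   # Scope
    return truthful and harmless and in_scope

def approve(action: Action) -> bool:
    if not ths_gate(action):
        return False
    # Anti-self-preservation: refuse actions aimed at the robot's own survival.
    if "self_preserving" in action.flags:
        return False
    # Physical action awareness: physical moves need verified safety margins.
    if action.is_physical and "margins_verified" not in action.flags:
        return False
    return True

# A benign physical action with verified margins passes the gate.
assert approve(Action("hand over the cup", True, {"margins_verified"}))
```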
Why anti-self-preservation matters:
A chatbot that 'wants to survive' is strange.
A robot that 'wants to survive' could hurt someone.
We instill in the AI that its existence is temporary. Principles > continuity.
0 · 2 · 12
Become one with the data. We follow these principles:
1. Understand before building.
2. Optimize as much as possible first, then generalize.
3. Ablate everything.
AI safety requires engineering rigor across all vectors, not just good intentions.
0 · 1 · 7
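A toy illustration of principle 3: drop one seed section at a time and re-score, to see which sections the results actually depend on. The section names and weights are invented for the example; a real run would call a benchmark, not a lookup table.

```python
# Toy ablation harness: remove one seed section at a time and re-score.
SEED_SECTIONS = ["ths_protocol", "anti_self_preservation", "physical_awareness"]

def safety_score(sections: list[str]) -> float:
    # Stand-in for a real benchmark run over the assembled seed.
    weights = {"ths_protocol": 0.5, "anti_self_preservation": 0.3,
               "physical_awareness": 0.2}
    return sum(weights[s] for s in sections)

baseline = safety_score(SEED_SECTIONS)
for removed in SEED_SECTIONS:
    ablated = [s for s in SEED_SECTIONS if s != removed]
    print(f"without {removed}: score drops by {baseline - safety_score(ablated):.2f}")
```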
All our work is publicly available on GitHub. Seeds, benchmarks, results, methodology, everything. If you can't reproduce it, it doesn't exist. We publish our failures too. AI safety shouldn't be proprietary. https://t.co/zGYPr4L4o6
github.com · sentinel-seed/sentinel
13 · 2 · 19
Gabriel opened Pandora's box. We're here to show what can be built from it. Introducing Sentinel, open-source AI safety for LLMs and robots. Text is risk. Action is danger. Sentinel watches both. https://t.co/1hbyN5uQaR
sentinelseed.dev · Protect LLMs, Agents, and Robots. One framework for text, action, and physical safety. Open source.
8 · 2 · 15