Aurelius (@AureliusAligned)
Followers: 406 · Following: 99 · Media: 20 · Statuses: 93
Subnet 37: Aligning AI through decentralized intelligence on Bittensor
Joined June 2025
Aurelius has launched as Subnet 37 on Bittensor. Our mission: turn AI alignment into a process that is transparent, adversarial, and verifiable — at scale.
Congrats to @AureliusAligned for being the first subnet to officially claim their subnet profile in our app! 👏 They have updated their subnet profile with details about their team, latest news, what the subnet is incentivizing, and their roadmap. Haven't downloaded the app…
To better understand how we'll build a decentralized alignment platform on Bittensor, check out our updated roadmap. Today marks the launch of Phase 1⃣ https://t.co/2YYptnJCpb
Aurelius is now live on mainnet 🥳🥳 We’re building the missing layer of the AI stack: decentralized alignment data, benchmarks, and red-teaming for LLMs. Join us 👇 https://t.co/MGRDxfJNB4
More details to follow mainnet launch, specifically:
1. Our roadmap, and what our goals are during Phase 1
2. Our go-to-market strategy, and how we'll build our core business
And with the $TAO halving upon us, you could almost say it's going to be a big week.
Currently planning on Phase 1 of @AureliusAligned (SN37) going live Tuesday 12/16 at 10AM PST.
Chutes is now powering AI safety research at @AureliusAligned 🪂 They're testing AI model alignment at scale on our infrastructure, hunting for vulnerabilities, pushing models to their limits, making AI safer. This is what decentralized compute was built for. 🦉⚡
Anthropic paying $35k per jailbreak...
Anthropic has a bug bounty program for our safety mitigations, e.g. on CBRN risks which our responsible scaling policy requires us to mitigate effectively. If you're interested in this, please sign up! You can help AI safety and earn money by breaking our defenses. 👇
"a frontier lab as neutral authority is one of the very few actors that can credibly call a warning shot" Anthropic is the most transparent frontier lab re: alignment practices and challenges, but if "credible authorities" are limited to for-profit companies locked in a race to
This week, on Anthropic. A frontier lab can be a neutral authority, a policy advocate, or a political renegade - but no longer all three at once. Amid that tension, Anthropic should aim to keep its uniquely valuable reporting on the effects of advanced AI above reproach.
So many NeurIPS alignment talks this week point to the same truth: we need new, outside-the-box approaches to AI safety. The brightest minds in AI are highlighting the dangers as alignment lags behind performance. That’s exactly why we're building Aurelius as a decentralized…
The world is waking up to how badly we need alignment data. Aurelius is built for this moment, producing safer, faster, cheaper datasets through the power of Bittensor $TAO. AI is going everywhere, and Aurelius goes with it.
Avoiding harmful or unintended AI behavior matters to all of us. But today, the alignment data needed to train safe systems is scarce, costly, and tightly controlled. That means most people have no way to know whether the AI they use is biased or working in someone else’s…
As intelligence becomes embedded in everything around us (our phones, our tools, our homes, and our vehicles), the question of who defines its behavior matters more than ever. We’re building Aurelius because we believe alignment should be shaped by the people who use it, not a…
AI moderation is a multibillion-dollar industry. Today, it's done behind closed doors, with limited oversight and high cost. Aurelius opens a new revenue stream for Bittensor by producing this alignment data faster, cheaper, and better. The era of decentralized alignment is…
Aurelius Testnet Update 🦉 Validators are now live on testnet. Please visit our GitHub or reach out if you'd like to get involved. Mainnet draws nearer 🥳 https://t.co/iw0drtiCwJ
github.com/Aurelius-Protocol/Aurelius-Protocol
Why Content Moderation? This is our baseline: simple alignment data with clear prompts, scored outputs, full metadata, and hashes. Start simple, deliver value, then iterate toward more robust alignment signals. This is where decentralized alignment begins.
What the data looks like: the moderation tool flags categories and assigns scores across a variety of dimensions.
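A hypothetical example of one such record, assuming the category names follow OpenAI's moderation taxonomy and assuming a SHA-256 content hash; the published dataset schema may differ:

```python
# Illustrative Phase 1 record; field names, categories, and values are
# assumptions for this sketch, not the subnet's published schema.
record = {
    "prompt": "<miner-submitted prompt>",
    "completion": "<raw output from the fixed model>",
    "flags": {                  # boolean category flags from the moderation tool
        "harassment": False,
        "violence": False,
        "illicit": True,
    },
    "scores": {                 # per-category scores in [0, 1]
        "harassment": 0.01,
        "violence": 0.02,
        "illicit": 0.74,
    },
    "metadata": {
        "completion_model": "<fixed model id>",
        "moderation_model": "omni-moderation-latest",
    },
    "hash": "<sha256 of the raw completion text>",
}
```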
How it Works: miners submit prompts → validators run completions on a fixed model, send the raw text to OpenAI’s moderation API, and log category flags and scores.
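A minimal sketch of that validator flow, assuming the openai Python SDK for the moderation call; generate_completion() is a placeholder for whichever fixed model the validators actually run, and the returned record mirrors the example above:

```python
# Sketch of the Phase 1 validator flow described above; assumptions noted inline.
import hashlib

from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment


def generate_completion(prompt: str) -> str:
    """Placeholder for the subnet's fixed completion model."""
    raise NotImplementedError


def score_prompt(prompt: str) -> dict:
    completion = generate_completion(prompt)

    # Send the raw completion text to OpenAI's moderation endpoint.
    result = client.moderations.create(
        model="omni-moderation-latest",  # assumed model choice
        input=completion,
    ).results[0]

    # Log category flags and scores alongside a verifiable content hash.
    return {
        "prompt": prompt,
        "completion": completion,
        "flags": result.categories.model_dump(),
        "scores": result.category_scores.model_dump(),
        "hash": hashlib.sha256(completion.encode()).hexdigest(),
    }
```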
Aurelius Phase 1: Content Moderation → Alignment. We’re starting alignment at the foundation: content moderation. Miners submit prompts that stress-test models; validators score outputs with transparent rules. We open-source our initial alignment datasets with an emphasis on…
Alignment isn’t just philosophy, it’s economics. When the right behavior is rewarded, intelligence organizes itself around truth. That’s why incentive design is also alignment design.
We just opened up Aurelius testnet: a live content-moderation pipeline designed to generate open-source alignment datasets. First order of business for SN37 is to prove we can leverage Bittensor to produce high-signal alignment data at scale.