Torchbearer Community
@JoinTorchbearer
Followers 101 · Following 175 · Media 0 · Statuses 121
Humanity has a coordination problem. We have a plan for that.
Global
Joined October 2025
🔥 Torchbearer Community (TBC) is live 🔥 We are a group of volunteers addressing the lack of global coordination on humanity’s most urgent challenge: existential risk from advanced AI. TBC was founded by @NPCollapse & @Gabe_cc, who worked for years on technical alignment and…
1 · 6 · 23
Have 3-8 hours per week to volunteer? https://t.co/PMiSgsnvgw Have 5-10 MINUTES per week to volunteer?
microcommit.io
MicroCommit is a platform that lets organisations send weekly requests to their members. More importantly, it lets them easily check that members have acknowledged the requests (whether it is by...
0 · 0 · 1
A deep understanding of AI models is not necessary to have a stake in this. You just need to be a person who doesn't want to live in a world run by systems we don't understand and can't control. Most people feel that way. The question is whether we organize before the…
1 · 2 · 7
It is a common belief among AI developers that they will be able to retain control of the alien intelligences they are growing. This is very unlikely to be the case. Given the evidence and theory to date, it is extreme hubris to claim otherwise.
Superintelligence, if we develop it using anything like current methods, would not be under meaningful human control. That's the bottom line of a new study I've put out entitled Control Inversion (link in second post). Many experts I talk to who take superintelligence (real,…
0 · 2 · 7
Baroness Cass (@Hilary_Cass) just backed our campaign for binding regulation on the most powerful AI systems! 90+ UK politicians have joined, acknowledging the extinction risk from AI. This is the first time that such a coalition of lawmakers is taking a stand on this issue!
2 · 5 · 24
With many more to come, as has been predicted for decades. Our best hope for survival is to pause. Join the 120k others fighting for a pause: https://t.co/Wi9EJXdQ4i. And take other actions as you can: https://t.co/QeewM92W8t.
existentialsafety.org
Our central aim is to ensure humanity survives this decade. We need to fight for our existential safety. Everyone can and should do their part. Collectively, we can increase the odds that we not only...
We believe this is the first documented case of a large-scale AI cyberattack executed without substantial human intervention. It has significant implications for cybersecurity in the age of AI agents. Read more:
0 · 2 · 7
While not surprising, this is seriously worrying. Understanding emerging dual-use AI capabilities should be a priority for the US gov. Now's the time to turbocharge CAISI. "the first documented case of a large-scale cyberattack executed without substantial human intervention."
We disrupted a highly sophisticated AI-led espionage campaign. The attack targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We assess with high confidence that the threat actor was a Chinese state-sponsored group.
5 · 10 · 75
If I were a policymaker right now I would 1) be asking 'How many months are between Claude Code's capabilities and those of leading open-source models for cyberattack purposes?' 2) What are Claude Code's capabilities (and those of other frontier models) expected to be in 1 year…
5 · 18 · 78
Will Anthropic deprecate the dangerous models? We hope so.
We disrupted a highly sophisticated AI-led espionage campaign. The attack targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We assess with high confidence that the threat actor was a Chinese state-sponsored group.
1 · 1 · 8
"The painful truth that's really beginning to sink in is that we're much closer to figuring out how to build this stuff than we are to figuring out how to control it." 100,000+ people have signed the Superintelligence Statement, calling to ban the development of superintelligent
20 · 17 · 74
@DavidSKrueger Altman is acknowledging risks, while also specifying RSI (recursive self-improvement) as a goal and even publicly announcing the timeline for their internal automated AI researcher product roadmap. They are definitely still trying to build ASI. Time to redouble efforts towards regs+moratorium.
0 · 3 · 7
Superintelligence, if we develop it using anything like current methods, would not be under meaningful human control. That's the bottom line of a new study I've put out entitled Control Inversion (link in second post). Many experts I talk to who take superintelligence (real,…
15 · 29 · 105
Common ground between the authors of AI 2027 and AI as Normal Technology! Coauthored article below.
23 · 67 · 388
AGI inevitability is a cope. https://t.co/SPFbeuoPxU Join @JoinTorchbearer and let people know we want sensible risk management and transparency in AI development.
0 · 2 · 12
Most people worried about AI safety think their options are: Work at an AI lab. Donate to research. Tweet into the void. There's a fourth option: organize locally and make this an issue your state rep can't ignore. We train people to do exactly that. Not theory. Actual…
1 · 1 · 12
Less than a week ago, we announced that 85 UK politicians support our campaign for binding regulation on the most powerful AIs. Now it's 90! Lord Goldsmith (@ZacGoldsmith) is the 90th UK politician to back our campaign statement, acknowledging the extinction threat posed by AI.
Experts continue to warn that superintelligence poses a risk of human extinction. In our newsletter this week, we're providing an important update on the progress of our UK campaign to prevent this threat, along with news on other developments in AI. https://t.co/oCxPJBZtC5
3 · 7 · 29
Microsoft AI CEO Mustafa Suleyman says that smarter-than-human AIs capable of self-improvement, complete autonomy, or independent goal setting would be "very dangerous" and should never be built. He says others in the field "just hope" that such an AI would not harm us.
12 · 17 · 67
The AI debate isn't "optimists vs. pessimists." It's "build it and hope we can control it" vs. "prove it's controllable first." The first group has billions in funding. The second group has common sense. We're organizing the second group.
2 · 4 · 16
@JoinTorchbearer Nice to see these topics getting written about in Time magazine. More and more people are becoming aware of the risks and are showing their concern. Humans decide whether we build superintelligence, or not. It is NOT inevitable.
0 · 1 · 7