Torchbearer Community

@JoinTorchbearer

Followers 101 · Following 175 · Media 0 · Statuses 121

Humanity has a coordination problem. We have a plan for that.

Global · Joined October 2025
@JoinTorchbearer
Torchbearer Community
16 days
🔥 Torchbearer Community (TBC) is live 🔥 We are a group of volunteers addressing the lack of global coordination on humanity’s most urgent challenge: existential risk from advanced AI. TBC was founded by @NPCollapse & @Gabe_cc, who worked for years on technical alignment and…
1 reply · 6 reposts · 23 likes
@JoinTorchbearer
Torchbearer Community
15 hours
A deep understanding of AI models is not necessary to have a stake in this. You just need to be a person who doesn't want to live in a world run by systems we don't understand and can't control. Most people feel that way. The question is whether we organize before the…
1 reply · 2 reposts · 7 likes
@aisafetyaction
Collective Action for Existential Safety ⏹️
1 day
It is a common belief among some AI developers that they will be able to retain control of the alien intelligences they are growing. This is very unlikely to be the case. It is extreme hubris to claim otherwise, given the evidence and theory to date.
@AnthonyNAguirre
Anthony Aguirre
2 days
Superintelligence, if we develop it using anything like current methods, would not be under meaningful human control. That's the bottom line of a new study I've put out, entitled Control Inversion (link in second post). Many experts I talk to who take superintelligence (real,…
0 replies · 2 reposts · 7 likes
@ai_ctrl
ControlAI
18 hours
Baroness Cass (@Hilary_Cass) just backed our campaign for binding regulation on the most powerful AI systems! 90+ UK politicians have joined, acknowledging the extinction risk from AI. This is the first time that such a coalition of lawmakers is taking a stand on this issue!
2 replies · 5 reposts · 24 likes
@aisafetyaction
Collective Action for Existential Safety ⏹️
1 day
With many more to come, as has been predicted for decades. Our best hope of survival is to pause. Join the 120k others in fighting for a pause: https://t.co/Wi9EJXdQ4i. And take other actions as you can: https://t.co/QeewM92W8t.
existentialsafety.org
Our central aim is to ensure humanity survives this decade. We need to fight for our existential safety. Everyone can and should do their part. Collectively, we can increase the odds that we not only...
@AnthropicAI
Anthropic
2 days
We believe this is the first documented case of a large-scale AI cyberattack executed without substantial human intervention. It has significant implications for cybersecurity in the age of AI agents. Read more:
0 replies · 2 reposts · 7 likes
@janet_e_egan
Janet Egan
1 day
While not surprising, this is seriously worrying. Understanding emerging dual-use AI capabilities should be a priority for the US gov. Now's the time to turbocharge CAISI. "the first documented case of a large-scale cyberattack executed without substantial human intervention."
@AnthropicAI
Anthropic
2 days
We disrupted a highly sophisticated AI-led espionage campaign. The attack targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We assess with high confidence that the threat actor was a Chinese state-sponsored group.
5 replies · 10 reposts · 75 likes
@S_OhEigeartaigh
Seán Ó hÉigeartaigh
1 day
If I were a policymaker right now I would 1) Be asking 'How many months are between Claude Code's capabilities and those of leading open-source models for cyberattack purposes?' 2) What are Claude Code's capabilities (and those of other frontier models) expected to be in 1 year,…
5 replies · 18 reposts · 78 likes
@JoinTorchbearer
Torchbearer Community
1 day
Will Anthropic deprecate the dangerous models? We hope so.
@AnthropicAI
Anthropic
2 days
We disrupted a highly sophisticated AI-led espionage campaign. The attack targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We assess with high confidence that the threat actor was a Chinese state-sponsored group.
1 reply · 1 repost · 8 likes
@FLI_org
Future of Life Institute
2 days
"The painful truth that's really beginning to sink in is that we're much closer to figuring out how to build this stuff than we are to figuring out how to control it." 100,000+ people have signed the Superintelligence Statement, calling to ban the development of superintelligent
20 replies · 17 reposts · 74 likes
@pseudomoaner
Luke McNally
3 days
@DavidSKrueger Altman is acknowledging risks, while also specifying RSI as a goal and even publicly announcing the timeline for their internal automated AI researcher product roadmap. They are definitely still trying to build ASI. Time to redouble efforts towards regs+moratorium.
0 replies · 3 reposts · 7 likes
@AnthonyNAguirre
Anthony Aguirre
2 days
Superintelligence, if we develop it using anything like current methods, would not be under meaningful human control. That's the bottom line of a new study I've put out, entitled Control Inversion (link in second post). Many experts I talk to who take superintelligence (real,…
15 replies · 29 reposts · 105 likes
@DKokotajlo
Daniel Kokotajlo
2 days
Common ground between the authors of AI 2027 and AI as Normal Technology! Coauthored article below.
23 replies · 67 reposts · 388 likes
@mrgunn
@mrgunn ⏸️
2 days
AGI inevitability is a cope. https://t.co/SPFbeuoPxU Join @JoinTorchbearer and let people know we want sensible risk management and transparency in AI development.
0 replies · 2 reposts · 12 likes
@JoinTorchbearer
Torchbearer Community
3 days
0 replies · 0 reposts · 4 likes
@JoinTorchbearer
Torchbearer Community
3 days
Most people worried about AI safety think their options are: Work at an AI lab. Donate to research. Tweet into the void. There's a fourth option: organize locally and make this an issue your state rep can't ignore. We train people to do exactly that. Not theory. Actual…
1 reply · 1 repost · 12 likes
@ai_ctrl
ControlAI
4 days
Less than a week ago, we announced that 85 UK politicians support our campaign for binding regulation on the most powerful AIs. Now it's 90! Lord Goldsmith (@ZacGoldsmith) is the 90th UK politician to back our campaign statement, acknowledging the extinction threat posed by AI.
@ai_ctrl
ControlAI
8 days
Experts continue to warn that superintelligence poses a risk of human extinction. In our newsletter this week, we're providing an important update on the progress of our UK campaign to prevent this threat, along with news on other developments in AI. https://t.co/oCxPJBZtC5
3 replies · 7 reposts · 29 likes
@ai_ctrl
ControlAI
4 days
Microsoft AI CEO Mustafa Suleyman says that smarter-than-human AIs capable of self-improvement, complete autonomy, or independent goal setting would be "very dangerous" and should never be built. He says others in the field "just hope" that such an AI would not harm us.
12 replies · 17 reposts · 67 likes
@NPCollapse
Connor Leahy
4 days
@OlekKier @tszzl Yes, I think choices (such as getting out of self-destructive cycles) are hard, scary, and require a lot of work, but they are what we need to make if we want a good future.
1 reply · 2 reposts · 14 likes
@JoinTorchbearer
Torchbearer Community
4 days
The AI debate isn't "optimists vs. pessimists." It's "build it and hope we can control it" vs. "prove it's controllable first." The first group has billions in funding. The second group has common sense. We're organizing the second group.
2 replies · 4 reposts · 16 likes
@kevinlotto
Kevin Lotto
4 days
@JoinTorchbearer Nice to see these topics getting written about in Time magazine. More and more people are becoming aware of the risks and are showing their concern. Humans decide whether we build superintelligence, or not. It is NOT inevitable.
0 replies · 1 repost · 7 likes