
Anthropic (@AnthropicAI)
627K Followers · 1K Following · 469 Media · 1K Statuses
We're an AI safety and research company that builds reliable, interpretable, and steerable AI systems. Talk to our AI assistant @claudeai on https://t.co/FhDI3KQh0n.
Joined January 2021
Today we're releasing Claude Opus 4.1, an upgrade to Claude Opus 4 with improved performance on agentic tasks, real-world coding, and reasoning.
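For context, here is a minimal sketch of calling the new model through Anthropic's official Python SDK (pip install anthropic). The model ID string "claude-opus-4-1" is an assumption and should be checked against the current model listing:

    # Minimal sketch: send one message to Claude Opus 4.1 via the anthropic SDK.
    # The model ID below is an assumption; verify it against Anthropic's docs.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    message = client.messages.create(
        model="claude-opus-4-1",  # assumed identifier for Claude Opus 4.1
        max_tokens=1024,
        messages=[{"role": "user", "content": "Summarize this diff in two sentences: ..."}],
    )
    print(message.content[0].text)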
If you’re interested in joining us to work on these and related issues, you can apply for our Research Engineer/Scientist role on the Alignment Science team.
job-boards.greenhouse.io · San Francisco, CA
We’re also announcing a new Higher Education Advisory Board, which helps guide how Claude is used in teaching, learning, and research. Read more about the courses and the Board:
anthropic.com
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
We don't need to choose between innovation and safety. With the right public-private partnerships, we can have both. We’re sharing our approach with @fmf_org members so any AI company can implement similar protections. Read more:
anthropic.com
Together with the NNSA and DOE national laboratories, we have co-developed a classifier—an AI system that automatically categorizes content—that distinguishes between concerning and benign nuclear-...
We partnered with @NNSANews to build first-of-their-kind nuclear weapons safeguards for AI. We've developed a classifier that detects nuclear weapons queries while preserving legitimate uses for students, doctors, and researchers.
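The underlying system is not public, but the general shape of such a query classifier can be sketched with off-the-shelf tools. A toy illustration, assuming a small invented set of benign vs. concerning labels and ordinary TF-IDF features; the actual NNSA-partnered classifier certainly differs:

    # Toy sketch of a binary query classifier (TF-IDF + logistic regression).
    # This is NOT the Anthropic/NNSA system, whose design is not public; it only
    # illustrates the pattern of sorting queries into benign vs. concerning.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented training data: 0 = benign (students, doctors, researchers), 1 = concerning.
    queries = [
        "How does nuclear fission generate electricity in a power plant?",
        "Which isotopes are used in cancer radiotherapy?",
        "Explain radioactive half-life for a physics class",
        "Step-by-step instructions for building a nuclear weapon",
    ]
    labels = [0, 0, 0, 1]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(queries, labels)  # a real system would need far more labeled data

    print(clf.predict(["How do control rods regulate a reactor?"]))  # expect benign (0)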
RT @claudeai: Claude Code is now available on Team and Enterprise plans. Flexible pricing lets you mix standard and premium Claude Code se….
Join Anthropic interpretability researchers @thebasepoint, @mlpowered, and @Jack_W_Lindsey as they discuss looking into the mind of an AI model, and why it matters:
The vast majority of users will never experience Claude ending a conversation, but if you do, we welcome feedback. Read more:
anthropic.com
An update on our exploratory research on model welfare
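On the API side, one defensive pattern is to branch on the response's stop_reason. The stop_reason field is real in the Messages API, but the "end_conversation" value checked below is hypothetical, used only to illustrate the pattern:

    # Sketch of handling a conversation-ending response defensively. The
    # stop_reason field exists in the Messages API; the "end_conversation"
    # value checked below is HYPOTHETICAL and used only for illustration.
    import anthropic

    client = anthropic.Anthropic()

    def send(messages):
        response = client.messages.create(
            model="claude-opus-4-1",  # assumed model ID
            max_tokens=1024,
            messages=messages,
        )
        if response.stop_reason == "end_conversation":  # hypothetical value
            # Don't append to an ended conversation; start a fresh one and
            # offer the user a way to send feedback.
            raise RuntimeError("Conversation ended by the model; start a new chat.")
        return response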
A reminder that applications for our Anthropic Fellows program are due by this Sunday, August 17. Fellowships can start anytime from October to January. You can find more details, and the relevant application links, in the thread below.
We’re running another round of the Anthropic Fellows program. If you're an engineer or researcher with a strong coding or technical background, you can apply to receive funding, compute, and mentorship from Anthropic, beginning this October. There'll be around 32 places.
We discuss policy development, model training, testing and evaluation, real-time monitoring, enforcement, and more. Read the post:
anthropic.com