
Anthropic (@AnthropicAI)
Followers: 625K · Following: 1K · Media: 465 · Statuses: 1K
We're an AI safety and research company that builds reliable, interpretable, and steerable AI systems. Talk to our AI assistant @claudeai on https://t.co/FhDI3KQh0n.
Joined January 2021
Today we're releasing Claude Opus 4.1, an upgrade to Claude Opus 4 on agentic tasks, real-world coding, and reasoning.
Join Anthropic interpretability researchers @thebasepoint, @mlpowered, and @Jack_W_Lindsey as they discuss looking into the mind of an AI model - and why it matters:
The vast majority of users will never experience Claude ending a conversation, but if you do, we welcome feedback. Read more:
anthropic.com: An update on our exploratory research on model welfare
A reminder that applications for our Anthropic Fellows program are due by this Sunday, August 17. Fellowships can start anytime from October to January. You can find more details, and the relevant application links, in the thread below.
We’re running another round of the Anthropic Fellows program. If you're an engineer or researcher with a strong coding or technical background, you can apply to receive funding, compute, and mentorship from Anthropic, beginning this October. There'll be around 32 places.
We discuss policy development, model training, testing and evaluation, real-time monitoring, enforcement, and more. Read the post:
anthropic.com
RT @claudeai: Claude Sonnet 4 now supports 1 million tokens of context on the Anthropic API—a 5x increase. Process over 75,000 lines of co….
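For context, here is a minimal sketch of what opting into the larger context window might look like with the Anthropic Python SDK. The model ID, the anthropic-beta flag value, and the input file are assumptions for illustration, not details confirmed by the tweet above; check the current API reference before relying on them.

# Sketch: sending a very large document to Claude Sonnet 4 via the Anthropic Python SDK.
# The model ID and the "context-1m-2025-08-07" beta flag are assumptions; verify both
# against the current Anthropic API docs.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical input file, e.g. a dump of ~75,000 lines of code.
with open("large_codebase_dump.txt") as f:
    codebase = f.read()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model ID
    max_tokens=4096,
    extra_headers={"anthropic-beta": "context-1m-2025-08-07"},  # opt in to the 1M-token context window
    messages=[
        {
            "role": "user",
            "content": f"Here is our codebase:\n\n{codebase}\n\nSummarize the main modules and how they interact.",
        }
    ],
)

print(response.content[0].text)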
This marks the broadest availability of an AI assistant for federal workers to date. Read more:
anthropic.com
We are removing barriers to government AI adoption by offering Claude for Enterprise and Claude for Government to all three branches of government, including federal civilian executive branch...
RT @claudeai: Claude can now reference past chats, so you can easily pick up from where you left off.
We joined the Pledge to America's Youth along with 100+ organizations committed to advancing AI education. We'll work with educators, students, and communities nationwide to build essential AI and cybersecurity skills for the next generation.
Over 100 companies and nonprofits have now signed the @WhiteHouse's Pledge to America's Youth: Investing in AI Education. As part of the Pledge, organizations will make AI education resources available for young people and teachers nationwide - this includes technologies…
Claude Code can now automatically review your code for security vulnerabilities.
We just shipped automated security reviews in Claude Code. Catch vulnerabilities before they ship with two new features:
- /security-review slash command for ad-hoc security reviews
- GitHub Actions integration for automatic reviews on every PR
RT @collision: Anthropic is one of the fastest-growing businesses of all time. @DarioAmodei and I chatted about flying past $5b in ARR, Ant….
Opus 4.1 is now available to paid Claude users and in Claude Code. It's also on our API, Amazon Bedrock, and Google Cloud's Vertex AI. Read more:
anthropic.com
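As a rough illustration of "also on our API, Amazon Bedrock, and Google Cloud's Vertex AI," here is a sketch using the provider-specific clients in the Anthropic Python SDK. The model IDs, AWS region, and GCP project/region below are assumed placeholders, not values from the tweet; each platform publishes its own model catalog.

# Sketch: the anthropic Python SDK exposes separate clients for the first-party API,
# Amazon Bedrock, and Google Cloud Vertex AI. All model IDs below are assumptions.
from anthropic import Anthropic, AnthropicBedrock, AnthropicVertex

prompt = [{"role": "user", "content": "Refactor this function to remove the duplicated branch."}]

# First-party Anthropic API (reads ANTHROPIC_API_KEY from the environment).
api_client = Anthropic()
r1 = api_client.messages.create(
    model="claude-opus-4-1-20250805",  # assumed model ID
    max_tokens=1024,
    messages=prompt,
)

# Amazon Bedrock (uses the default AWS credential chain).
bedrock_client = AnthropicBedrock(aws_region="us-east-1")
r2 = bedrock_client.messages.create(
    model="anthropic.claude-opus-4-1-20250805-v1:0",  # assumed Bedrock model ID
    max_tokens=1024,
    messages=prompt,
)

# Google Cloud Vertex AI (project and region are placeholders).
vertex_client = AnthropicVertex(project_id="my-gcp-project", region="us-east5")
r3 = vertex_client.messages.create(
    model="claude-opus-4-1@20250805",  # assumed Vertex model ID
    max_tokens=1024,
    messages=prompt,
)

print(r1.content[0].text)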
We’re also hiring full-time researchers to investigate topics like this in more depth:
We're launching an "AI psychiatry" team as part of interpretability efforts at Anthropic! We'll be researching phenomena like model personas, motivations, and situational awareness, and how they lead to spooky/unhinged behaviors. We're hiring - join us!