Anthropic

@AnthropicAI

Followers: 625K · Following: 1K · Media: 465 · Statuses: 1K

We're an AI safety and research company that builds reliable, interpretable, and steerable AI systems. Talk to our AI assistant @claudeai on https://t.co/FhDI3KQh0n.

Joined January 2021
@AnthropicAI
Anthropic
13 days
Today we're releasing Claude Opus 4.1, an upgrade to Claude Opus 4 with improved performance on agentic tasks, real-world coding, and reasoning.
@AnthropicAI
Anthropic
3 days
Join Anthropic interpretability researchers @thebasepoint, @mlpowered, and @Jack_W_Lindsey as they discuss looking into the mind of an AI model - and why it matters:
@AnthropicAI
Anthropic
3 days
The vast majority of users will never experience Claude ending a conversation, but if you do, we welcome feedback. Read more:
anthropic.com: An update on our exploratory research on model welfare
@AnthropicAI
Anthropic
3 days
This is an experimental feature, intended only for use by Claude as a last resort in extreme cases of persistently harmful and abusive conversations.
@AnthropicAI
Anthropic
3 days
As part of our exploratory work on potential model welfare, we recently gave Claude Opus 4 and 4.1 the ability to end a rare subset of conversations on
@AnthropicAI
Anthropic
4 days
A reminder that applications for our Anthropic Fellows program are due by this Sunday, August 17. Fellowships can start anytime from October to January. You can find more details, and the relevant application links, in the thread below.
@AnthropicAI
Anthropic
20 days
We’re running another round of the Anthropic Fellows program. If you're an engineer or researcher with a strong coding or technical background, you can apply to receive funding, compute, and mentorship from Anthropic, beginning this October. There'll be around 32 places.
@AnthropicAI
Anthropic
6 days
We discuss policy development, model training, testing and evaluation, real-time monitoring, enforcement, and more. Read the post:
anthropic.com
@AnthropicAI
Anthropic
6 days
Today we're sharing a post on how our Safeguards team identifies potential misuse of our models and builds defenses against it.
@AnthropicAI
Anthropic
6 days
RT @claudeai: Claude Sonnet 4 now supports 1 million tokens of context on the Anthropic API—a 5x increase. Process over 75,000 lines of co….
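The retweet above is truncated, but the gist is that Sonnet 4 can accept much longer inputs through the Messages API. A minimal sketch of how that might be used from the Python SDK, assuming the model ID and long-context beta flag shown below (neither is stated in the tweet, so verify both against the API docs):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical input: a large codebase dump that would exceed the default context window.
with open("codebase_dump.txt") as f:
    long_document = f.read()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed Sonnet 4 model ID
    max_tokens=2048,
    messages=[{"role": "user", "content": "Review this codebase for dead code:\n\n" + long_document}],
    # Assumed opt-in header for the 1M-token context window; check the docs for the exact value.
    extra_headers={"anthropic-beta": "context-1m-2025-08-07"},
)
print(response.content[0].text)
```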
@AnthropicAI
Anthropic
6 days
Federal workers deserve access to the most capable AI tools to better serve the American people. Today, we’re removing cost barriers to Claude for all three branches of the U.S. government.
@AnthropicAI
Anthropic
7 days
RT @claudeai: Claude can now reference past chats, so you can easily pick up from where you left off.
@AnthropicAI
Anthropic
10 days
We joined the Pledge to America's Youth along with 100+ organizations committed to advancing AI education. We'll work with educators, students, and communities nationwide to build essential AI and cybersecurity skills for the next generation.
@WHOSTP47
WHOSTP47
11 days
Over 100 companies and nonprofits have now signed the @WhiteHouse's Pledge to America's Youth: Investing in AI Education. As part of the Pledge, organizations will make AI education resources available for young people and teachers nationwide - this includes technologies,.
@AnthropicAI
Anthropic
12 days
Claude Code can now automatically review your code for security vulnerabilities.
@claudeai
Claude
12 days
We just shipped automated security reviews in Claude Code. Catch vulnerabilities before they ship with two new features:
- /security-review slash command for ad-hoc security reviews
- GitHub Actions integration for automatic reviews on every PR
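Claude Code's /security-review command and the GitHub Actions integration are the supported paths. Purely as a loose illustration of the underlying idea (not the actual feature), here is how one might ask the Messages API to flag vulnerabilities in a local diff using the Python SDK; the prompt wording and model alias are assumptions:

```python
import subprocess
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Collect the working-tree diff; Claude Code handles this step itself.
diff = subprocess.run(["git", "diff"], capture_output=True, text=True).stdout

response = client.messages.create(
    model="claude-opus-4-1",  # assumed model alias
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": (
            "Review the following diff for security vulnerabilities such as injection, "
            "auth bypass, unsafe deserialization, and leaked secrets. For each finding, "
            "give the file, line, severity, and a suggested fix.\n\n" + diff
        ),
    }],
)
print(response.content[0].text)
```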
@AnthropicAI
Anthropic
12 days
RT @collision: Anthropic is one of the fastest-growing businesses of all time. @DarioAmodei and I chatted about flying past $5b in ARR, Ant….
@AnthropicAI
Anthropic
13 days
U.S. federal departments and agencies can now more quickly and easily get access to Claude to transform how they work, all while still meeting federal security and compliance requirements.
@AnthropicAI
Anthropic
13 days
Opus 4.1 is now available to paid Claude users and in Claude Code. It's also on our API, Amazon Bedrock, and Google Cloud's Vertex AI. Read more:
anthropic.com
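For the API route, a minimal sketch of calling the model with the Python SDK; the model identifier below is an assumption, so confirm the exact Opus 4.1 ID against the models list:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-1",  # assumed Opus 4.1 identifier
    max_tokens=1024,
    messages=[{"role": "user", "content": "Outline a migration plan from REST polling to webhooks."}],
)
print(response.content[0].text)
```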
@AnthropicAI
Anthropic
13 days
We plan to release substantially larger improvements to our models in the coming weeks.
@AnthropicAI
Anthropic
17 days
We’re also hiring full-time researchers to investigate topics like this in more depth:
@Jack_W_Lindsey
Jack Lindsey
26 days
We're launching an "AI psychiatry" team as part of interpretability efforts at Anthropic!  We'll be researching phenomena like model personas, motivations, and situational awareness, and how they lead to spooky/unhinged behaviors. We're hiring - join us!