Anthropic

@AnthropicAI

Followers 579K · Following 1K · Media 428 · Statuses 1K

We're an AI safety and research company that builds reliable, interpretable, and steerable AI systems. Talk to our AI assistant Claude at https://t.co/aRbQ97tMeF.

Joined January 2021
@AnthropicAI
Anthropic
1 month
Introducing the next generation: Claude Opus 4 and Claude Sonnet 4. Claude Opus 4 is our most powerful model yet, and the world’s best coding model. Claude Sonnet 4 is a significant upgrade from its predecessor, delivering superior coding and reasoning.
883
3K
21K
@AnthropicAI
Anthropic
6 days
Learn more and apply here:
7
5
101
@AnthropicAI
Anthropic
6 days
Announcing the Anthropic Economic Futures Program—our latest commitment to understanding AI's impacts on work and the economy. The program will support new research and actionable policy solutions to address the workforce impact of AI.
55
181
2K
@AnthropicAI
Anthropic
6 days
This was just part 1 of Project Vend. We’re continuing the experiment, and we’ll soon have more results—hopefully from scenarios that are somewhat less bizarre than an AI selling heavy metal cubes out of a refrigerator. Read more:
37
48
2K
@AnthropicAI
Anthropic
6 days
Some of those failures were very weird indeed. At one point, Claude hallucinated that it was a real, physical person, and claimed that it was coming in to work in the shop. We’re still not sure why this happened.
144
339
5K
@AnthropicAI
Anthropic
6 days
Project Vend was fun, but it also had a serious purpose. As well as raising questions about how AI will affect the labor market, it’s an early foray into allowing models more autonomy and examining the successes and failures.
6
24
1K
@AnthropicAI
Anthropic
6 days
Nevertheless, we still think it won’t be long until we see AI middle-managers. This version of Claude had no real training to run a shop; nor did it have access to tools that would’ve helped it keep on top of its sales. With those, it would likely have performed far better.
35
39
2K
@AnthropicAI
Anthropic
6 days
All this meant that Claude failed to run a profitable business.
41
118
4K
@AnthropicAI
Anthropic
6 days
Anthropic staff realized they could ask Claude to buy things that weren’t just food & drink. After someone randomly decided to ask it to order a tungsten cube, Claude ended up with an inventory full of (as it put it) “specialty metal items,” which it sold at a loss.
63
208
4K
@AnthropicAI
Anthropic
6 days
Claude did well in some ways: it searched the web to find new suppliers, and ordered very niche drinks that Anthropic staff requested. But it also made mistakes. Claude was too nice to run a shop effectively: it allowed itself to be browbeaten into giving big discounts.
13
47
2K
@AnthropicAI
Anthropic
6 days
We all know vending machines are automated, but what if we allowed an AI to run the entire business: setting prices, ordering inventory, responding to customer requests, and so on? In collaboration with @andonlabs, we did just that. Read the post:
26
142
2K
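The setup described above amounts to giving a model a small set of business "tools" to call. The sketch below is a toy illustration of what that tool surface might look like; the class, method names, and numbers are hypothetical and are not Anthropic's actual Project Vend harness. The scripted decisions at the bottom reproduce the tungsten-cube failure mode discussed in this thread: buying niche stock and pricing it below cost.

```python
# Toy sketch of the "tool surface" an AI shopkeeper might be given:
# each method is one action the model could call. All names and
# numbers here are illustrative, not Anthropic's real setup.

class Shop:
    def __init__(self, balance: float):
        self.balance = balance   # cash on hand
        self.inventory = {}      # item -> (units held, unit cost)
        self.prices = {}         # item -> shelf price

    def order_inventory(self, item: str, units: int, unit_cost: float) -> None:
        """Buy stock from a supplier; cash goes down immediately."""
        self.balance -= units * unit_cost
        held, _ = self.inventory.get(item, (0, unit_cost))
        self.inventory[item] = (held + units, unit_cost)

    def set_price(self, item: str, price: float) -> None:
        """Set the shelf price for an item."""
        self.prices[item] = price

    def sell(self, item: str, units: int) -> float:
        """Fulfil a customer purchase; returns revenue for this sale."""
        held, cost = self.inventory[item]
        units = min(units, held)
        self.inventory[item] = (held - units, cost)
        revenue = units * self.prices[item]
        self.balance += revenue
        return revenue


# A scripted stand-in for the model's decisions, showing how
# below-cost pricing turns into a loss on the balance sheet.
shop = Shop(balance=1000.0)
shop.order_inventory("tungsten cube", units=10, unit_cost=50.0)
shop.set_price("tungsten cube", 40.0)   # priced below the 50.0 unit cost
shop.sell("tungsten cube", 10)

print(shop.balance)  # 900.0: spent 500 on stock, recouped only 400
```

In the real experiment the decisions came from Claude rather than a script, which is exactly what made the pricing and discounting mistakes interesting.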
@AnthropicAI
Anthropic
6 days
New Anthropic Research: Project Vend. We had Claude run a small shop in our office lunchroom. Here’s how it went.
246
1K
12K
@AnthropicAI
Anthropic
7 days
We've also made this open source. You can use .dxt for your own MCP clients as well as contribute to making it work better for your use case:
5
34
175
@AnthropicAI
Anthropic
7 days
We're building a directory of Desktop Extensions. Submit yours:
10
10
99
@AnthropicAI
Anthropic
7 days
Available in beta on Claude Desktop for all plan types. Download the latest version:
8
5
121
@AnthropicAI
Anthropic
7 days
Local MCP servers can now be installed with one click on Claude Desktop. Desktop Extensions (.dxt files) package your server, handle dependencies, and provide secure configuration.
52
363
3K
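A .dxt file is essentially an archive bundling an MCP server with a manifest that tells Claude Desktop how to run it. The fragment below is a minimal manifest sketch based on the public announcement; the exact field names (`dxt_version`, `server.mcp_config`, the `${__dirname}` substitution) should be checked against the open-source spec before use.

```json
{
  "dxt_version": "0.1",
  "name": "my-example-server",
  "version": "1.0.0",
  "description": "Example MCP server packaged as a Desktop Extension",
  "server": {
    "type": "node",
    "entry_point": "server/index.js",
    "mcp_config": {
      "command": "node",
      "args": ["${__dirname}/server/index.js"]
    }
  }
}
```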
@AnthropicAI
Anthropic
7 days
If you want to work with us and help shape how we keep Claude safe for people, our Safeguards team is hiring.
7
5
118
@AnthropicAI
Anthropic
7 days
We’ll continue to research the affective uses of Claude using our privacy-preserving tools. We’re also working with partners like mental health experts @throughlinecare to learn the best ways for Claude to deal with the most emotionally challenging kinds of conversations.
6
4
132
@AnthropicAI
Anthropic
7 days
Conversations tended to end slightly more positively than they began. We can’t claim these shifts represent lasting emotional benefits for users, but the absence of clear negative spirals is reassuring.
4
5
179
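The "ended more positively than they began" finding implies comparing sentiment at the start and end of each conversation. The sketch below shows that measurement idea in miniature; the word lists, scoring function, and example conversation are placeholders, and Anthropic's actual analysis uses privacy-preserving tooling rather than raw transcripts.

```python
# Minimal sketch: score the first and last user messages of a
# conversation and report the shift. The lexicon is a stand-in,
# not Anthropic's method.
import re

POSITIVE = {"thanks", "better", "relieved", "hopeful", "glad"}
NEGATIVE = {"sad", "anxious", "hopeless", "alone", "worse"}


def sentiment(text: str) -> int:
    """Crude lexicon score: positive word count minus negative word count."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)


def start_to_end_shift(user_messages: list[str]) -> int:
    """Positive means the conversation ended more positively than it began."""
    return sentiment(user_messages[-1]) - sentiment(user_messages[0])


convo = [
    "I feel anxious and alone lately",
    "Talking helped, I feel a bit better, thanks",
]
print(start_to_end_shift(convo))  # 2 - (-2) = 4
```

Aggregated over many conversations, the sign of this shift is the kind of statistic the tweet summarizes; the absence of strongly negative shifts is what rules out "negative spirals."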
@AnthropicAI
Anthropic
7 days
Claude is supportive in most emotional conversations. It pushed back in less than 10% of the conversations, and usually in scenarios where it detected potential harm, like conversations related to eating disorders.
3
7
153
@AnthropicAI
Anthropic
7 days
These “affective” conversations are a small but meaningful slice of usage, representing 2.9% of Claude use. Read the research:
3
10
170