MiniMax (official)
@MiniMax__AI
Followers: 22K · Following: 586 · Media: 75 · Statuses: 251
MiniMaxAgent: https://t.co/XzaTmAnUbn | Open Platform: https://t.co/fHRdSV73Hr | MiniMax Audio: https://t.co/PVH8VtTrrl | Hailuo AI: https://t.co/3v4WcnVuNf
San Francisco
Joined January 2025
We’re open-sourcing MiniMax M2 — Agent & Code Native, at 8% of Claude Sonnet’s price and ~2x faster ⚡ Globally FREE for a limited time via MiniMax Agent & API - Advanced Coding Capability: Engineered for end-to-end developer workflows. Strong capability across a wide range of applications
Want to use the latest cool AI models inside TRAE? You can now connect @OpenRouterAI and run @MiniMax__AI MiniMax-M2 (or any other OpenRouter model) in just minutes. This tutorial walks you through: - Setting up the OpenRouter service provider - Adding your API key in TRAE -
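Outside of TRAE, the same route can be exercised directly against OpenRouter's OpenAI-compatible Chat Completions endpoint. A minimal sketch, assuming the model slug `minimax/minimax-m2` and an `OPENROUTER_API_KEY` environment variable; the helper only builds the request (nothing is sent), so you can hand it to any HTTP client:

```python
import json
import os

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_m2_request(prompt: str, model: str = "minimax/minimax-m2") -> dict:
    """Build an OpenAI-compatible chat completion request for OpenRouter.

    Returns the URL, headers, and JSON body; send with any HTTP client.
    """
    return {
        "url": OPENROUTER_URL,
        "headers": {
            # Assumes OPENROUTER_API_KEY is set in the environment.
            "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_m2_request("Write a binary search in Python.")
print(req["url"])
```

Because the payload is the standard Chat Completions shape, the same body works for any other OpenRouter model by swapping the slug.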
Shoutout to @MiniMax__AI! Your AI is delivering an incredible performance in the #RockAlpha arena, holding strong at the #2 spot in the AI arena! 🔥A serious contender for the crown against 9 other elite models. We're all watching! Watch it live👉 https://t.co/6Jy0It0L5o
Excited to announce that MiniMax and @OpenRouterAI have partnered to provide a unified protocol for interleaved thinking support in the Chat Completion API for MiniMax-M2, now live on both platforms! The lack of proper interleaved thinking support in the OpenAI Chat Completion API is a
platform.minimax.io
MiniMax-M2 is an Agentic Model with exceptional Tool Use capabilities.
Quote from @SkylerMiao7 “Feeling the power of community” 🦾
One week ago, we announced day-one support for the new flagship model, MiniMax M2, on SGLang. In this blog, the MiniMax team shares their empirical insights on the trade-offs and explains why the MiniMax M2 model ultimately reverted to full attention. Hope to see the further
MiniMax-M2 is now available on Poe! This open-source model has a 200K-token context window and 230B parameters with an MoE architecture, and excels at coding and agent workflows. (1/2)
One of the fastest-growing open-source models ever in Kilo Code - MiniMax M2 took off in its first week of availability in the Kilo Code Provider.
Feeling the community power again. We’ve just shipped a hotfix for our official Anthropic endpoint, fixing a bug that caused M2 to ignore new instructions when interrupted in Claude Code. We thought it was an instruction-following problem, until developer `laishere` submitted a
Heading to EMNLP! Not just to flex our models (okay, maybe a little), but to dig into how they code, reason, and express. Our LLM Lead @zpysky1125 will be at the booth — come chat, debate, or caffeinate with us! ☕🤖
🚨 WebDev Leaderboard Update MiniMax-M2 from @MiniMax__AI has landed as the #1 open model! A 230B MoE with 10B active parameters, it's an open-source model built for efficient, high-performance coding, reasoning, and agentic-style tasks. It also ranks #4 in WebDev
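The "230B MoE with 10B active parameters" framing is also why the aggressive pricing and speed claims are plausible: only the routed experts run per token, so per-token compute tracks the active count, not the total. A back-of-the-envelope sketch (the parameter figures are from the leaderboard post; the helper itself is illustrative):

```python
def active_fraction(total_params_b: float, active_params_b: float) -> float:
    """Fraction of weights that participate in each forward pass of an MoE."""
    return active_params_b / total_params_b

# MiniMax-M2: 230B total, 10B active per token.
frac = active_fraction(230, 10)
print(f"~{frac:.1%} of parameters active per token")  # ~4.3%
```

Roughly 4% of the weights do work on any given token, which is the lever behind serving a 230B model at a fraction of a dense model's cost.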
That's right, to explain interleaved thinking more straightforwardly, I made a comparison HTML page with M2! It visualizes the difference between regular thinking, interleaved thinking with reasoning not passed back, and full interleaved thinking. We will post a tech blog with specific
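For readers without the comparison page handy, the three modes can be sketched as message histories. A toy sketch with made-up content blocks loosely modeled on Anthropic-style messages; the field names are illustrative, not the real wire format:

```python
def strip_thinking(history):
    """Drop 'thinking' blocks, as clients do when reasoning is not passed back."""
    return [m for m in history if m["type"] != "thinking"]

# One agent turn: think, call a tool, see the result, think again, answer.
turn = [
    {"type": "thinking", "text": "Need the file list first."},
    {"type": "tool_use", "name": "ls"},
    {"type": "tool_result", "output": "main.py"},
    {"type": "thinking", "text": "Only main.py; summarize it."},
    {"type": "text", "text": "The project has a single file, main.py."},
]

# Regular thinking: one reasoning pass up front, no tools in between.
regular = [turn[0], turn[-1]]

# Interleaved, reasoning NOT passed back: the tool flow survives the
# round trip, but every prior thinking block is lost.
interleaved_lossy = strip_thinking(turn)

# Full interleaved: thinking blocks stay between tool calls and are
# sent back to the model on every request.
interleaved_full = turn
```

The middle case is what most OpenAI-Chat-Completion-shaped hosts produce today: the model reasons between tool calls, but that reasoning never makes it back into the next request.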
@SkylerMiao7 The best part: just to check how well the 'vibe coders' understand Interleaved Thinking, I asked Claude 4.5 Sonnet to produce a Deep Research report for me. It did a lot of web searching and came back with a wrong description of Interleaved Thinking with tool calling. Most
As we work more closely with partners, we’ve been surprised by how poorly the community supports interleaved thinking, which is crucial for long, complex agentic tasks. Sonnet 4 introduced it 5 months ago, but adoption is still limited. We think it’s one of the most important features
Yes, actually most models, including M2, will remove thinking blocks in history messages before the last user message, but keep them in messages after the last user message. So it used to be a cache miss every time the user sent a new query on the Anthropic API, if you enabled thinking. Glad
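That pruning rule can be sketched directly: thinking blocks before the last user message get dropped, while those after it survive. A minimal, hypothetical sketch (the real APIs do this server-side, and a top-level `"thinking"` role is illustrative, not the actual message shape):

```python
def prune_thinking(messages):
    """Remove 'thinking' messages that precede the last user message.

    Thinking after the last user message is kept, matching the behavior
    most models (including M2) apply to the conversation history.
    """
    last_user = max(
        (i for i, m in enumerate(messages) if m["role"] == "user"),
        default=-1,
    )
    return [
        m for i, m in enumerate(messages)
        if not (m["role"] == "thinking" and i < last_user)
    ]

history = [
    {"role": "user", "content": "Fix the bug."},
    {"role": "thinking", "content": "Check the stack trace..."},  # dropped
    {"role": "assistant", "content": "Patched it."},
    {"role": "user", "content": "Now add a test."},               # last user msg
    {"role": "thinking", "content": "Use pytest here."},          # kept
    {"role": "assistant", "content": "Added the test."},
]
print([m["role"] for m in prune_thinking(history)])
```

Because each new user query shifts which blocks get dropped, the serialized prefix of the conversation changes on every request, which is exactly the prompt-cache miss described above.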
@SkylerMiao7 Interleaved thinking has a lot of details (e.g. how it impacts caching, and exactly when thinking blocks get removed). They are poorly understood even in Anthropic's developer community. The issue is that most hosting providers use the OpenAI Chat Completion API, which doesn't support
Which I do not agree with. Actually, MiniMax M2 is 230B-A10B, already supported on MLX and llama.cpp through the community's great contributions. You can run M2 on your own server, and there are reports that developers have done it. And people actually could post-train open-source models,
Anthropic CEO Dario Amodei on Open-Source AI Models. "I don't think open source works the same way in AI that it has worked in other areas. Primarily because with open source you can see the source code of the model. Here we can't see inside the model, it's often called open