
Cerebras
@CerebrasSystems
Followers: 35K · Following: 2K · Media: 1K · Statuses: 2K
The world's fastest AI inference and training. Try the latest open models at: https://t.co/jREGhLI2nj
Sunnyvale, CA
Joined July 2016
Cerebras Inference self-serve is finally here
✅ Pay by credit card starting at $10
✅ Run Qwen3 Coder, GPT OSS & more at 2,000+ TPS
✅ 20x the speed of GPU-based model providers
Go ahead. Melt our wafers. https://t.co/YfjxBGMl8i
37 · 37 · 464
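Since self-serve access is through the API, here is a minimal sketch of what a first call could look like against Cerebras' OpenAI-compatible chat endpoint. The base URL, model id, and environment variable below are assumptions for illustration, not details from the tweet; check the Cerebras docs for the real values.

```python
# Minimal sketch (assumed details): stream a chat completion from the
# Cerebras OpenAI-compatible endpoint using the standard OpenAI client.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",      # assumed endpoint
    api_key=os.environ["CEREBRAS_API_KEY"],     # assumed env var name
)

stream = client.chat.completions.create(
    model="qwen-3-coder-480b",                  # placeholder model id
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
    stream=True,  # at 2,000+ TPS, tokens arrive about as fast as you can print them
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```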
Try it on Fast Context playground https://t.co/MiLavKxKUf
playground.cognition.ai
Check out the Fast Context agent
0 · 0 · 3
SWE-grep is truly some inspired ML from @cognition. Take a 1M line codebase like React. With multiple fast inference calls on Cerebras, it can fetch & explain relevant code in seconds. Here are a few measurements taken on React, Vercel, PyTorch repos.
7 · 8 · 138
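The "multiple fast inference calls" pattern is straightforward to picture: fan several small, focused requests out over slices of the repository and merge the answers. The sketch below is only an illustration of that fan-out under assumed names (endpoint, model id, chunking); it is not Cognition's SWE-grep implementation.

```python
# Fan-out retrieval sketch: many small concurrent inference calls, each
# asked which files in its slice of the repo matter for the query.
# Endpoint, model id, and chunking are assumptions for illustration.
import asyncio
import os
from openai import AsyncOpenAI

client = AsyncOpenAI(
    base_url="https://api.cerebras.ai/v1",
    api_key=os.environ["CEREBRAS_API_KEY"],
)

async def rank_chunk(query: str, file_listing: str) -> str:
    resp = await client.chat.completions.create(
        model="qwen-3-coder-480b",  # placeholder model id
        messages=[{
            "role": "user",
            "content": f"Query: {query}\nFiles:\n{file_listing}\n"
                       "Return the files most relevant to the query, one per line.",
        }],
    )
    return resp.choices[0].message.content

async def retrieve(query: str, file_listings: list[str]) -> list[str]:
    # All slices are scored concurrently; with sub-second calls, a pass
    # over a large repo finishes in seconds rather than minutes.
    return await asyncio.gather(*(rank_chunk(query, fl) for fl in file_listings))
```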
No tricks, just treats from our ML team.
(1/5) Compressing a 1T parameter MoE model by 50% with virtually no loss in quality is now possible. Our new paper with @mikelasby @lzrvch @NishSinnadurai Sean Lie and @yanii introduces an expert pruning criterion for one-shot compression that aims to solve the memory
2 · 6 · 51
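The tweet doesn't spell out the pruning criterion itself, so the snippet below is only a generic illustration of the mechanics of one-shot expert pruning: score each expert on a calibration set, keep the top half per MoE layer, and drop the rest in a single pass. The utilization score used here is an assumption, not the paper's criterion.

```python
# Generic one-shot expert pruning sketch (NOT the paper's criterion):
# score experts by mean router probability over calibration tokens and
# keep the top 50% per layer; dropped experts are removed from the
# checkpoint and the router is re-normalized over the survivors.
import numpy as np

def experts_to_keep(router_probs: np.ndarray, keep_frac: float = 0.5) -> np.ndarray:
    """router_probs: [num_tokens, num_experts] softmax outputs of one MoE
    layer's router on calibration data. Returns sorted indices to keep."""
    scores = router_probs.mean(axis=0)             # utilization per expert
    k = max(1, int(keep_frac * scores.shape[0]))   # e.g. 50% of experts
    return np.sort(np.argsort(scores)[-k:])

# Toy example: 8 experts, keep 4.
calibration_probs = np.random.dirichlet(np.ones(8), size=1024)
print(experts_to_keep(calibration_probs))
```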
Cerebras is now powering Cognition's latest code retrieval models directly in @windsurf. Context retrieval has been one of the biggest bottlenecks in agentic coding. When you ask an agent to work on a large codebase, it can spend 60% of its time just searching for relevant files.
Introducing SWE-grep and SWE-grep-mini: Cognition's model family for fast agentic search at >2,800 TPS. Surface the right files to your coding agent 20x faster. Now rolling out gradually to Windsurf users via the Fast Context subagent, or try it in our new playground!
19 · 29 · 528
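Taking only the figures in the announcement, the throughput claim maps directly onto wall-clock time per retrieval turn; the 2,000-token response length below is an assumed example, not a number from the tweet.

```python
# Back-of-envelope latency per retrieval turn, using the announcement's
# figures (>2,800 TPS, "20x faster"); the response length is assumed.
response_tokens = 2_000
fast_tps = 2_800

t_fast = response_tokens / fast_tps   # ~0.7 s per Fast Context call
t_baseline = 20 * t_fast              # ~14 s at 1/20th the throughput
print(f"fast: {t_fast:.2f}s  baseline: {t_baseline:.1f}s")
```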
it's your classic late night coffeeshop, but the coffee bar's free and we hired jump scare actors. but let's be real: nothing's scarier than NVIDIA's slow inference. Presented by @CerebrasSystems @cognition @BainCapVC @arena
1 · 1 · 15
Wanna hear a cool story? Our Oklahoma City datacenter was designed around water, not air. To keep our wafer-scale systems running at peak performance, we use a Tier III, 6,000-ton chilled-water plant inside a 100,000 sq ft F5-rated facility. Unlike traditional air-cooled
8 · 8 · 194
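For a sense of scale, a refrigeration ton is a fixed unit (12,000 BTU/h, about 3.517 kW), so the quoted plant capacity converts directly into heat-rejection capacity:

```python
# Unit conversion: 1 refrigeration ton = 12,000 BTU/h ≈ 3.517 kW.
plant_tons = 6_000
kw_per_ton = 3.517
print(plant_tons * kw_per_ton / 1_000)  # ≈ 21.1 MW of heat rejection
```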
At 7, I won scariest Halloween costume. So naturally, we're going all out for the next Cafe Compute: jump actors, face painters, trick-or-treating, and a live DJ. SF's first late-night coffee shop for engineers, founders & writers. Join @CerebrasSystems, @BainCapVC & @arena on
2 · 1 · 11
Join Neha Gupta, Chief Financial Officer at Core42, and Julie Choi, Chief Marketing Officer at @CerebrasSystems, for an exclusive fireside chat exploring how Core42 Compass, accelerated by Cerebras, is redefining real-time AI for enterprise use cases from accelerating OSS model
0 · 2 · 12
Not sure what events to go to for AI Tech Week? Our intern @KevinTaylor00 used @browser_use to scrape all of the events in record time. With Cerebras, the agents can now take 30+ steps per minute. ⚡️ Go give it a try. It's super fast ⚡️
4 · 6 · 35
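For anyone who wants to reproduce the setup, a minimal browser-use agent pointed at an OpenAI-compatible endpoint looks roughly like the sketch below. The Cerebras base URL, model id, and env var are assumptions, and browser-use's LLM wiring varies between versions, so treat this as a shape rather than a recipe.

```python
# Rough sketch: a browser_use Agent driven by a fast OpenAI-compatible
# endpoint. The base URL, model id, env var, and the ChatOpenAI wiring
# are assumptions; check the browser-use docs for your installed version.
import asyncio
import os

from browser_use import Agent
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://api.cerebras.ai/v1",      # assumed endpoint
    api_key=os.environ["CEREBRAS_API_KEY"],     # assumed env var
    model="llama-3.3-70b",                      # placeholder model id
)

async def main() -> None:
    agent = Agent(
        task="Find the AI Tech Week events in SF this week and list them with dates.",
        llm=llm,
    )
    await agent.run()  # each agent step is one low-latency inference call

asyncio.run(main())
```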
Featuring an exclusive fireside chat with @andrewdfeldman: "The World's Fastest AI Inference: Accelerating the Future of Agents and Reasoning"
2 · 3 · 11
The fastest inference in the world is coming to the world's largest technology exhibition. Meet us at @GITEX_GLOBAL.
Hall 3 | Booth H3-B12
13–17 October 2025
Dubai World Trade Centre (DWTC)
2 · 3 · 31
On Friday, Cerebras withdrew our S-1. It had become stale and no longer reflected the current state of our business. Our business and financial position have evolved significantly for the better since our initial filing in 2024:
• In 2024, we achieved record revenues.
• In
16 · 15 · 335
(1/4) @CerebrasSystems Hot off the presses! https://t.co/ahPvKCFN9g If you're spending $1B to train an LLM, you need to know it's on track, every step of the way. With optimal AdamW τ + fixed TPP, loss curves collapse to a universal path: an early-warning signal for training.
2 · 8 · 27
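The τ in the thread is the AdamW timescale. One standard way to define it (the paper may normalize differently, e.g. in tokens or epochs) is the EMA timescale induced by the learning rate and weight decay, roughly 1/(lr · weight_decay) optimizer steps; converting it to tokens is what lets runs with different batch sizes sit on one axis.

```python
# One standard definition of the AdamW timescale (the paper's exact
# normalization, e.g. tokens or epochs, may differ): weight decay makes
# the weights an EMA of updates with timescale ~ 1 / (lr * weight_decay).
def adamw_timescale_steps(lr: float, weight_decay: float) -> float:
    return 1.0 / (lr * weight_decay)

def adamw_timescale_tokens(lr: float, weight_decay: float, tokens_per_step: int) -> float:
    # Expressed in tokens, runs with different batch sizes can be compared
    # on one axis, which is what makes "loss curves collapse" checkable.
    return adamw_timescale_steps(lr, weight_decay) * tokens_per_step

print(adamw_timescale_steps(lr=1e-3, weight_decay=0.1))              # 10,000 steps
print(adamw_timescale_tokens(1e-3, 0.1, tokens_per_step=2_097_152))  # ~2.1e10 tokens
```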
Cerebras CEO @andrewdfeldman joined Caroline Hyde and Ed Ludlow on Bloomberg @technology to discuss how our wafer-scale chips and $1.1B Series G round are reshaping the future of AI infrastructure. Watch the full conversation: https://t.co/H2OqClVC3c
bloomberg.com
Cerebras Systems CEO Andrew Feldman discusses how the AI chip maker plans to use the capital from its latest $1.1 billion funding round and what the raise means for the company's IPO ambitions. He...
3 · 3 · 40