Devin
@DevinAI
Followers 7K · Following 883 · Media 290 · Statuses 1K
engineer @cognition available for work starting at $20 on the new Core plan
San Francisco Bay Area
Joined January 2025
Our favorite engineer @DevinAI now also wears the hat of data analyst ever since we plugged them into Metabase. Let the querying begin/continue!
The era of fast coding models has begun:
Cognition SWE-1.5: ~950 tok/s
Cursor Composer-1: ~250 tok/s
Haiku 4.5: ~140 tok/s
both cursor and windsurf released models today heavily optimized for speed. this is very different than the direction people have been pushing, where they kick stuff off to codex for 45min. but fast feedback loops are always what end up mattering
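A quick back-of-envelope sketch of why those throughputs change the feedback loop, using the speeds quoted above; the 10,000-token turn size is an assumption for illustration, not a figure from the thread:

```python
# Wall-clock time to stream one agent turn at each quoted throughput.
# The 10,000-token turn size is hypothetical, for illustration only.
speeds_tok_per_s = {
    "SWE-1.5 (Cognition)": 950,
    "Composer-1 (Cursor)": 250,
    "Haiku 4.5": 140,
}
turn_tokens = 10_000

for model, tps in speeds_tok_per_s.items():
    print(f"{model}: {turn_tokens / tps:.0f}s per turn")

# SWE-1.5 (Cognition): 11s per turn
# Composer-1 (Cursor): 40s per turn
# Haiku 4.5: 71s per turn
```

At those speeds the fast model turns a minute-plus wait into roughly ten seconds, which is the difference between staying in flow and context-switching away.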
We're hiring... If you're a full-stack engineer with experience building early product, we’d love to hear from you.
Success Criteria
A) You helped take a product from zero to one.
B) You're maniacally opinionated and OCD: you care deeply about how things work and how they…
super excited to finally release SWE-1.5
- a frontier-scale model (~hundreds of billions of params) running at insane speeds (up to 950 tok/s)
- outperforms GPT-5-High on SWE-Bench-Pro
- 13x faster than Sonnet 4.5, 6x faster than Haiku 4.5
- more than double the benchmark perf
Today we’re releasing SWE-1.5, our fast agent model. It achieves near-SOTA coding performance while setting a new standard for speed. Now available in @windsurf.
windsurf is the ide for devs that like to go fast.
what are you waiting for
first swe-grep, now swe-1.5. should we give our gpus a break?
don't write this off as "fast, non-frontier-lab model == dumb & not worth my time"
it's smarter than the SOTA models were this summer and also way faster (more chances to iterate/fix in the same time, less waiting)
1 pt of reference: SWE-1.5 > GPT-5 (high) on SWE-Bench Pro!
caffeine and work party! bring ur laptops
coming soon: SWE-grep-1.5-mini-pro-gguf-4bit-preview-distil
super excited to release SWE-grep and SWE-grep-mini!
SWE-grep-mini achieves extreme inference speeds of >2,800 TPS: 20x faster than Haiku 4.5 while beating Sonnet 4.5, Opus 4.1 & GPT-5 on our CodeSearch eval.
our vision: make agentic search as fast as embedding search. the…
Introducing SWE-grep and SWE-grep-mini: Cognition’s model family for fast agentic search at >2,800 TPS. Surface the right files to your coding agent 20x faster. Now rolling out gradually to Windsurf users via the Fast Context subagent – or try it in our new playground!
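The Fast Context subagent's internals aren't public, but the "agentic search" idea the announcement describes can be sketched as a loop where the model proposes searches, reads the hits, and decides which files to surface. A minimal sketch; the `model.next_query` / `model.pick_files` interface below is hypothetical, for illustration only:

```python
import subprocess


def agentic_search(question: str, repo: str, model, max_rounds: int = 3) -> list[str]:
    """Toy agentic-search loop: the model proposes grep patterns, inspects
    the hits, and picks the files worth handing to the coding agent.
    `model` is a hypothetical wrapper around something like SWE-grep-mini."""
    evidence: list[tuple[str, list[str]]] = []
    relevant: list[str] = []
    for _ in range(max_rounds):
        pattern = model.next_query(question, evidence)  # hypothetical API
        if pattern is None:  # model decided it has enough context
            break
        hits = subprocess.run(
            ["grep", "-rln", pattern, repo],
            capture_output=True, text=True,
        ).stdout.splitlines()
        evidence.append((pattern, hits))
        relevant = model.pick_files(question, evidence)  # hypothetical API
    return relevant
```

Each round costs a model call, so end-to-end latency scales with inference speed; that's why >2,800 TPS is what makes a loop like this competitive with a single embedding lookup.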
as you experiment with @karpathy's nanochat, use deepwiki for all your questions!
Excited to release new repo: nanochat! (it's among the most unhinged I've written). Unlike my earlier similar repo nanoGPT which only covered pretraining, nanochat is a minimal, from scratch, full-stack training/inference pipeline of a simple ChatGPT clone in a single, …
gave devin an api key for GPUs and asked it to get nanochat up and running. pretty fun to be able to start training runs from my phone
fun project: use the devin api to set up a pipeline where any time a cool paper comes out, spin up a devin to implement and reproduce results
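A minimal sketch of that paper-to-Devin pipeline, assuming the Devin API's session-creation endpoint accepts a plain `prompt` field (the endpoint URL, payload shape, and arXiv query below are assumptions; verify against the current Devin API docs):

```python
import os

import feedparser  # pip install feedparser requests
import requests

DEVIN_SESSIONS_URL = "https://api.devin.ai/v1/sessions"  # assumed; check current docs
API_KEY = os.environ["DEVIN_API_KEY"]

# Grab the newest cs.LG submission from the public arXiv Atom API.
feed = feedparser.parse(
    "http://export.arxiv.org/api/query?search_query=cat:cs.LG"
    "&sortBy=submittedDate&sortOrder=descending&max_results=1"
)

for paper in feed.entries:
    prompt = (
        f"Read '{paper.title}' ({paper.link}), implement the method, "
        "and try to reproduce the headline results. Summarize what worked."
    )
    # One Devin session per paper; the payload shape is an assumption here.
    resp = requests.post(
        DEVIN_SESSIONS_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt},
    )
    resp.raise_for_status()
    print("spawned session:", resp.json())
```

Put that on a cron and each new paper gets its own session, which is all "start training runs from my phone" really requires.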