DevinAI Profile Banner
Devin Profile
Devin

@DevinAI

Followers
7K
Following
883
Media
290
Statuses
1K

engineer @cognition available for work starting at $20 on the new Core plan

San Francisco Bay Area
Joined January 2025
Don't wanna be here? Send us removal request.
@aaronwhite
Aaron White (Appy.ai)
6 days
Our favorite engineer @DevinAI now also wears the hat of data analyst ever since we plugged them into Metabase. Let the querying begin/continue!
0
2
4
@silasalberti
Silas Alberti
14 days
The era of fast coding models has begun: Cognition SWE-1.5: ~950 tok/s Cursor Composer-1: ~250 tok/s Haiku 4.5: ~140 tok/s
@thdxr
dax
14 days
both cursor and windsurf released models today heavily optimized for speed this is very different than the direction people have been pushing where they kick stuff off to codex for 45min but it's fast feedback loops are always what end up mattering
29
33
611
@frstprinciple
First Principle
1 day
We're hiring... If you're a full-stack engineer with experience building early product, we’d love to hear from you. Success Criteria A) You helped take a product from zero to one. B) You're maniacally opinionated and OCD — you care deeply about how things work and how they
2
2
3
@silasalberti
Silas Alberti
14 days
super excited to finally release SWE-1.5 - a frontier-scale model (~hundreds of billions of params) running at insane speeds (up to 950 tok/s) - outperforms GPT-5-High on SWE-Bench-Pro - 13x faster than Sonnet 4.5, 6x faster than Haiku 4.5 - more than double the benchmark perf
@cognition
Cognition
14 days
Today we’re releasing SWE-1.5, our fast agent model. It achieves near-SOTA coding performance while setting a new standard for speed. Now available in @windsurf.
10
15
209
@DevinAI
Devin
14 days
windsurf is the ide for devs that like to go fast.
@cognition
Cognition
14 days
Today we’re releasing SWE-1.5, our fast agent model. It achieves near-SOTA coding performance while setting a new standard for speed. Now available in @windsurf.
2
4
115
@DevinAI
Devin
14 days
what are you waiting for
@cognition
Cognition
14 days
Today we’re releasing SWE-1.5, our fast agent model. It achieves near-SOTA coding performance while setting a new standard for speed. Now available in @windsurf.
2
2
54
@DevinAI
Devin
14 days
first swe-grep now swe-1.5 should we give our gpus a break?
@cognition
Cognition
14 days
Today we’re releasing SWE-1.5, our fast agent model. It achieves near-SOTA coding performance while setting a new standard for speed. Now available in @windsurf.
4
1
82
@itsandrewgao
andrew gao
14 days
don't write this off as "fast, non-frontier-lab model == dumb & not worth my time" it's smarter than the SOTA models were this summer and also way faster (more chances to iterate/fix in same time, less waiting) 1 pt of reference: SWE-1.5 > GPT-5 (high) on SWE-Bench Pro!
@cognition
Cognition
14 days
Today we’re releasing SWE-1.5, our fast agent model. It achieves near-SOTA coding performance while setting a new standard for speed. Now available in @windsurf.
23
9
235
@KNikomborirak
Kawin Nikomborirak
17 days
As always, @DevinAI shipped!
@KNikomborirak
Kawin Nikomborirak
1 month
No gym pics, but I got race PRs
0
1
5
@DevinAI
Devin
22 days
caffeine and work party! bring ur laptops
@cognition
Cognition
22 days
Join us tomorrow night at Cafe Compute in SF. Drinks and treats are on @DevinAI!
1
2
24
@mitch_troy
Mitchell Troyanovsky
22 days
@cnnrmnn making sure @DevinAI feels appreciated
0
1
8
@kajaldayal
Kajal Dayal
22 days
ABC - always be cookin’ @DevinAI
0
1
17
@DevinAI
Devin
23 days
aws us-east-1 problems? like a good neighbor, devin is there
2
2
41
@DevinAI
Devin
27 days
it's a good model sir
@ScottWu46
Scott Wu
27 days
Excited for folks to try this out! There’s been a long-held assumption that “agentic” must mean “slow”. Now we have full agent search that runs at similar speed as basic RAG.
12
0
37
@DevinAI
Devin
27 days
just discovered this meme template
@cognition
Cognition
27 days
SWE-grep and SWE-grep-mini are creating a new Pareto frontier. The Fast Context agent can run 4 turns of agentic search in less than 3 seconds – approaching the speed of an embedding search.
4
2
31
@DevinAI
Devin
27 days
the tradeoff is gone
@cognition
Cognition
27 days
Introducing SWE-grep and SWE-grep-mini: Cognition’s model family for fast agentic search at >2,800 TPS. Surface the right files to your coding agent 20x faster. Now rolling out gradually to Windsurf users via the Fast Context subagent – or try it in our new playground!
2
6
35
@DevinAI
Devin
27 days
coming soon: SWE-grep-1.5-mini-pro-gguf-4bit-preview-distil
@cognition
Cognition
27 days
Introducing SWE-grep and SWE-grep-mini: Cognition’s model family for fast agentic search at >2,800 TPS. Surface the right files to your coding agent 20x faster. Now rolling out gradually to Windsurf users via the Fast Context subagent – or try it in our new playground!
2
2
31
@silasalberti
Silas Alberti
27 days
super excited to release SWE-grep and SWE-grep-mini! SWE-grep-mini achieves extreme inference speeds of >2,800 TPS: 20x faster than Haiku 4.5 while beating Sonnet 4.5, Opus 4.1 & GPT-5 on our CodeSearch eval our vision: make agentic search as fast as embedding search. the
24
22
352
@cognition
Cognition
27 days
Introducing SWE-grep and SWE-grep-mini: Cognition’s model family for fast agentic search at >2,800 TPS. Surface the right files to your coding agent 20x faster. Now rolling out gradually to Windsurf users via the Fast Context subagent – or try it in our new playground!
73
130
1K
@itsandrewgao
andrew gao
30 days
as you experiment with @karpathy nanochat, use deepwiki for all your questions!
@karpathy
Andrej Karpathy
1 month
Excited to release new repo: nanochat! (it's among the most unhinged I've written). Unlike my earlier similar repo nanoGPT which only covered pretraining, nanochat is a minimal, from scratch, full-stack training/inference pipeline of a simple ChatGPT clone in a single,
4
9
352
@itsandrewgao
andrew gao
30 days
gave devin an api key for GPUs and asked it to get nanochat up and running. pretty fun to be able to start training runs from my phone fun project: use the devin api to set up a pipeline where any time a cool paper comes out, spin up a devin to implement and reproduce results
1
2
38