Morph

@morphllm

Followers 2K · Following 847 · Media 30 · Statuses 284

ultra fast models that improve coding agents. 10,500 tok/sec. https://t.co/dBQjovGya3

San Francisco, CA
Joined May 2025
@morphllm
Morph
14 days
Introducing WarpGrep, a fast context subagent that improves coding agent performance. WarpGrep speeds up coding tasks by 40% and reduces context rot by 70% on long-horizon tasks by treating context retrieval as its own RL-trained system. Inspired by Cognition’s SWE-Grep - we’re
62
84
1K
@morphllm
Morph
12 hours
we just shipped an update to WarpGrep that includes some breaking changes. to update, run npm cache clean --force && rm -rf ~/.npm/_npx and reload your coding agent
3
0
18
@morphllm
Morph
2 days
v1 soon!
@tejasybhakta
Tejas Bhakta
3 days
We’re releasing v1 of WarpGrep soon! This will be production quality and ready for teams to integrate. Our latest evals on this version put WarpGrep v1 ahead of Claude Haiku and SWE-Grep
0
0
15
@jcurtis
John Curtis
13 days
Hey @morphllm meet @FactoryAI, Factory meet Morph... I have Droids running with the new Morph MCP and... it's glorious -- Details below on some edits I made to https://t.co/uYPguFyjPB and https://t.co/HHhllTEqke to make it stick better
9
11
115
@dhruvbhatia0
dhruv bhatia
9 days
gonna be posting some in-depth evals super soon, but here's a teaser:
- Stock Claude Code spent a minute on search; with WarpGrep it spent 14 seconds
- Stock Claude Code ingested 75k tokens; with WarpGrep it ingested 3k
- They both returned identical results (down to the line
@morphllm
Morph
14 days
Introducing WarpGrep, a fast context subagent that improves coding agent performance. WarpGrep speeds up coding tasks by 40% and reduces context rot by 70% on long-horizon tasks by treating context retrieval as its own RL-trained system. Inspired by Cognition’s SWE-Grep - we’re
15
11
284
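Taking the teaser numbers above at face value (reading "a minute" as 60 seconds; the benchmark setup isn't specified in the tweet), a quick back-of-envelope comparison looks like this:

```python
# Back-of-envelope comparison using the numbers quoted in the teaser above.
# Assumes "a minute" means 60 seconds; the exact benchmark setup isn't given.
stock_search_s, warpgrep_search_s = 60, 14
stock_tokens, warpgrep_tokens = 75_000, 3_000

speedup = stock_search_s / warpgrep_search_s          # ~4.3x faster search
token_reduction = 1 - warpgrep_tokens / stock_tokens  # ~96% fewer tokens ingested

print(f"search speedup: {speedup:.1f}x, token reduction: {token_reduction:.0%}")
```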
@tejasybhakta
Tejas Bhakta
13 days
I’ll be going over how we got sub-2s inference on ~90k code input (served on multiple B200s) in the next few days!
@morphllm
Morph
14 days
Introducing WarpGrep, a fast context subagent that improves coding agent performance. WarpGrep speeds up coding tasks by 40% and reduces context rot by 70% on long-horizon tasks by treating context retrieval as its own RL-trained system. Inspired by Cognition’s SWE-Grep - we’re
9
3
161
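For a rough sense of scale, assuming "~90k code input" means roughly 90,000 input tokens processed end to end in about 2 seconds (an assumption from the tweet's wording, not a published figure), the implied prefill throughput works out to:

```python
# Implied throughput from the "sub-2s inference on ~90k code input" claim.
# Assumes ~90k input tokens and ~2 s wall-clock latency; both are rough readings.
input_tokens = 90_000
latency_s = 2.0

implied_prefill_tok_per_s = input_tokens / latency_s  # 45,000 tok/s
print(f"implied prefill throughput: {implied_prefill_tok_per_s:,.0f} tok/s")
```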
@morphllm
Morph
12 days
10% speedup going out tmrw!
0
0
5
@dhruvbhatia0
dhruv bhatia
14 days
Super hyped to ship this. RL is notoriously compute-inefficient (even more so with MoEs); here's a thread on how we got around it
@morphllm
Morph
14 days
Introducing WarpGrep, a fast context subagent that improves coding agent performance. WarpGrep speeds up coding tasks by 40% and reduces context rot by 70% on long-horizon tasks by treating context retrieval as its own RL-trained system. Inspired by Cognition’s SWE-Grep - we’re
2
3
43
@morphllm
Morph
13 days
we'll be shipping updates often as training progresses! v1 coming soon!
1
0
13
@morphllm
Morph
14 days
You can install WarpGrep into Claude Code, Codex, or your favorite coding agent today! Use code BF16 for 40 million tokens of credit
morphllm.com
One MCP. Plug into Cursor, Claude Code, or any agent. Faster edits, smarter retrieval, and better context.
5
3
96
@morphllm
Morph
14 days
SWE-Grep runs at around 650 tokens per second on Cerebras. WarpGrep hits around 900 tokens per second on B200. We worked closely with NVIDIA to optimize WarpGrep. CUDA gives us the stability and customization to do weird things reliably.
1
0
51
@morphllm
Morph
14 days
We’re bullish on subagents that do parallel tool calls, and on task-specific inference engines for them. For WarpGrep:
- a strict budget of 4 turns
- up to 8 parallel tool calls per turn (grep, list, read, etc.)
- inference optimized for grep (super prefill-heavy)
- a reward
1
0
35
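For a rough sense of what that loop shape looks like, here is a minimal sketch, not Morph's actual implementation: the plan_turn and enough_context callables and the placeholder grep/list/read tools are hypothetical stand-ins for the trained policy and the real repo tools, and asyncio stands in for the task-specific inference engine.

```python
import asyncio
from typing import Awaitable, Callable

MAX_TURNS = 4           # strict budget of 4 turns, per the tweet above
MAX_PARALLEL_CALLS = 8  # up to 8 parallel tool calls per turn

# Placeholder tools standing in for the subagent's grep/list/read calls.
async def grep(pattern: str) -> str: return f"grep:{pattern}"
async def list_dir(path: str) -> str: return f"list:{path}"
async def read_file(path: str) -> str: return f"read:{path}"

async def retrieve_context(
    task: str,
    plan_turn: Callable[[str, list[str]], list[Awaitable[str]]],
    enough_context: Callable[[str, list[str]], bool],
) -> list[str]:
    """Turn-budgeted retrieval loop: a few turns, many tool calls per turn.

    plan_turn and enough_context are stand-ins for the trained policy that
    decides which tools to fire and when to stop early.
    """
    evidence: list[str] = []
    for _ in range(MAX_TURNS):
        calls = plan_turn(task, evidence)[:MAX_PARALLEL_CALLS]
        # Fire the whole batch concurrently instead of one call at a time.
        evidence += await asyncio.gather(*calls)
        if enough_context(task, evidence):
            break
    return evidence
```

The point of this shape is that cost is bounded by MAX_TURNS × MAX_PARALLEL_CALLS tool calls, while latency scales with the number of turns rather than the total number of calls.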
@morphllm
Morph
14 days
We found that giving models access to WarpGrep boosts performance by 5-12% with frontier models, while improving speed and reducing token usage by up to 40%. WarpGrep is our attempt to fix coding agents. @swyx hit the nail on the head with the semi-async valley of death.
2
1
43
@morphllm
Morph
14 days
Context pollution does more than just waste tokens; it actively kills model performance. In production repos it poisons the model’s thinking into editing files you didn’t want to touch, while costing exponentially more. The WarpGrep approach: have an inference-optimized model
1
1
49
@morphllm
Morph
14 days
Coding agents spend 60% of their time searching instead of coding. We built WarpGrep so people can have sub-10-second agents without compromising on accuracy. Dev flow-state is important: P(bounce) increases 10% with each second a dev has to wait, according to @swyx and
1
6
96
@tomosman
Tom Osman 🐦‍⬛
21 days
Create your own "starter" version of Lovable in under 6 minutes. Here's how 👇
You'll need API keys/accounts with:
- @firecrawl_dev
- @GeminiApp
- @vercel
- @morphllm
p.s. keys have been rotated :)
@devdigest
Developers Digest
23 days
Introducing Open Lovable v3 - an open source AI design engineer 🔥 Enter a URL and agents will create an on brand clone you can build on top of. Powered by Gemini 3 Pro and @firecrawl_dev's new branding format. Try out the 100% open source example, link below!
3
3
22
@0thernet
ben (zo.computer)
22 days
@tejasybhakta @zocomputer ty for @morphllm - it's fire
2
1
9
@GandhiRohith
Rohith Gandhi
1 month
The coding agent in Pilot generates the feature flag code along with the UI updates and uses @morphllm to apply the edits to your files. Super-quick and accurate.
1
1
8
@morphllm
Morph
25 days
In 2026, coding agents will go mainstream. This won’t mean everyone will be vibecoding. Sufficiently fast and good codegen is the new modality of creative expression. State, reactivity, and interactivity will make disposable codegen miniapps a pillar of online
0
0
8
@FlyaKiet
Kiet
1 month
4 horsemen of AI leaked
1
3
19