Matt Rickard Profile
Matt Rickard

@mattrickard

Followers: 10K
Following: 9K
Media: 606
Statuses: 5K

Working on @standardinputhq. Before: StanfordGSB, Kubernetes @ Google, Blackstone.

Stanford, CA
Joined June 2009
@steventey
Steven Tey
2 months
You might've heard about the hiring advice "hire for slope, not y-intercept." Same thing goes for software vendors – bet on fast-moving startups that care about craft, design & reliability. Your future self will thank you.

Some examples:
✦ bet on @linear vs jira
✦ bet on
@domwhyte42
Dominic Whyte
2 months
I think people underrate shipping velocity when choosing what tools to rely on. It’s common wisdom in hiring: better to recruit someone with a bit less experience but a really steep learning curve. In 6 months, they’ll be way ahead of the pack. The same applies to products.
23
23
247
@mattrickard
Matt Rickard
2 months
really like the idea of running parallel claudes in containers!
@kanjun
Kanjun 🐙
2 months
Sculptor: the missing UI for Claude Code 🎨 Imagine running 5 Claudes in parallel, safely in containers, while you stay in flow. Then bring their work straight into your IDE to test/edit together. This is how one developer ships like a team. Try it with Sonnet 4.5!
1
0
6
@mattrickard
Matt Rickard
3 months
current solutions are good, but have some shortcomings

Unsupervised agents (Codex Web / Claude Code GitHub Actions). Short feedback loops make Claude Code great. If it makes a wrong turn, you can interrupt and get it back on the right path. Codex Web and Claude Code GitHub
0
0
0
@mattrickard
Matt Rickard
3 months
Parallelism tax

Even with automation, the setup/clean-up grind is tedious. Worktrees share the same git object store, so you still need to be careful with operations and cleanup. Managing claude in docker means that I need to mount files, move around secrets, and manage
1
0
0
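For concreteness, here is a rough sketch of the per-task chores that post describes: create a worktree, run the agent in a container with the worktree and secrets mounted in, then tear it all down. This is a hypothetical Python wrapper around git worktree and docker run; the image name, env file, and agent command are placeholders, not anything from the thread.

```python
"""Hypothetical sketch of the worktree + container chores described above.
AGENT_IMAGE, ENV_FILE, and the agent command are placeholders."""
import subprocess
from pathlib import Path

REPO = Path.cwd()                  # main checkout; all worktrees share its object store
AGENT_IMAGE = "my-agent:latest"    # placeholder image with the coding agent installed
ENV_FILE = REPO / ".agent.env"     # secrets passed via --env-file instead of copied around


def run_task(task_id: str, branch: str) -> None:
    worktree = REPO.parent / f"wt-{task_id}"
    # Setup: one worktree per task so parallel agents don't clobber each other.
    subprocess.run(["git", "worktree", "add", "-b", branch, str(worktree)],
                   check=True, cwd=REPO)
    try:
        # Run the agent in a container with the worktree mounted.
        # Note: a worktree's .git file points back at the main repo, which is
        # part of the "be careful with operations" caveat in the post.
        subprocess.run(
            ["docker", "run", "--rm",
             "-v", f"{worktree}:/workspace",
             "--env-file", str(ENV_FILE),
             "-w", "/workspace",
             AGENT_IMAGE, "agent", "run", "--task", task_id],  # placeholder invocation
            check=True,
        )
    finally:
        # Cleanup: the step that's easy to forget and leaves stale worktrees behind.
        subprocess.run(["git", "worktree", "remove", "--force", str(worktree)],
                       check=True, cwd=REPO)


if __name__ == "__main__":
    run_task("demo-123", "agent/demo-123")
```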
@ajambrosino
Andrew Ambrosino
3 months
my morning on Vibe Coding Pro. codex feels illegal.
@VictorTaelin
Taelin
3 months
BTW, I've basically stopped using Opus entirely and I now have several Codex tabs with GPT-5-high working on different tasks across the 3 codebases (HVM, Bend, Kolmo). Progress has never been so intense. My job now is basically passing well-specified tasks to Codex, and reviewing
5
6
104
@solomonstre
Solomon Hykes
8 months
The distinction between "agent" and "tools" is arbitrary and meaningless. Architectures built on that distinction are dead ends. In the end it's all software: sometimes with LLM, sometimes without. I'm sure someone will soon put an agent inside a MCP and feel like a genius.
13
10
81
@thdxr
dax
9 months
we are working on OpenControl - a library that uses AI to finally solve the headache of internal tooling

idea is simple - define a set of tools in code that can reference functions from your codebase or anywhere else

these tools get deployed to your infra (runs anywhere,
46
39
744
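To make the "tools defined in code" idea concrete, here is a hypothetical sketch of the pattern in Python. It is not OpenControl's actual API, just an illustration of registering plain functions from a codebase and dispatching a model's tool calls to them by name.

```python
"""Hypothetical sketch of the tools-as-code pattern, not OpenControl's API."""
from typing import Any, Callable

TOOLS: dict[str, Callable[..., Any]] = {}


def tool(fn: Callable[..., Any]) -> Callable[..., Any]:
    """Register an ordinary function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn


@tool
def refund_order(order_id: str) -> str:
    # In a real setup this would call into your existing codebase.
    return f"refunded {order_id}"


def dispatch(name: str, **kwargs: Any) -> Any:
    """What a deployed endpoint would do with a model's tool call."""
    return TOOLS[name](**kwargs)


if __name__ == "__main__":
    print(dispatch("refund_order", order_id="ord_123"))
```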
@sundarpichai
Sundar Pichai
1 year
Introducing Veo 2, our new, state-of-the-art video model (with better understanding of real-world physics & movement, up to 4K resolution). You can join the waitlist on VideoFX. Our new and improved Imagen 3 model also achieves SOTA results, and is coming today to 100+ countries
520
1K
13K
@Ahmad_Al_Dahle
Ahmad Al-Dahle
1 year
Introducing Llama 3.3 – a new 70B model that delivers the performance of our 405B model but is easier & more cost-efficient to run. By leveraging the latest advancements in post-training techniques including online preference optimization, this model improves core performance at
175
472
3K
@mattrickard
Matt Rickard
1 year
interesting analogy
@alexalbert__
Alex Albert
1 year
Like LSP did for IDEs, we're building MCP as an open standard for LLM integrations. Build your own servers, contribute to the protocol, and help shape the future of AI integrations:
1
0
7
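For flavor, a minimal server of the kind that announcement invites people to build. This assumes the MCP Python SDK and its FastMCP quickstart API (pip install mcp); the word_count tool is a made-up example, not part of the protocol.

```python
"""Tiny MCP server sketch, assuming the MCP Python SDK's FastMCP helper."""
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")


@mcp.tool()
def word_count(text: str) -> int:
    """Count whitespace-separated words in a piece of text."""
    return len(text.split())


if __name__ == "__main__":
    # Serves the tool over stdio so an MCP client can call it.
    mcp.run()
```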
@mattrickard
Matt Rickard
1 year
Interesting approaches to making structured generation more efficient
@yi_xin_dong
Yixin Dong
1 year
🚀✨Introducing XGrammar: a fast, flexible, and portable engine for structured generation!
🤖 Accurate JSON/grammar generation
⚡️ 3-10x speedup in latency
🤝 Easy LLM engine integration
✅ Now in MLC-LLM, SGLang, WebLLM; vLLM & HuggingFace coming soon!
https://t.co/2yUCyTUVzL
0
0
4
@mattrickard
Matt Rickard
1 year
Should check this out! TS as a first class citizen
@calcsam
Sam Bhagwat
1 year
Excited to share that @smthomas3, Abhi Aiyer and I are building Mastra, a Typescript AI framework for the next million AI developers:
0
0
2
@mattrickard
Matt Rickard
1 year
happy franksgiving
0
1
8
@jeremyphoward
Jeremy Howard
1 year
Wow there is a *lot* of llms.txt activity happening now. Like… a lot lot… https://t.co/oPd8Zi1UFH
22
60
563
@mattrickard
Matt Rickard
1 year
my offering to the llm overlords: 491,946 tokens (all 936 of my posts) https://t.co/4903WdQPqg
0
0
9
@mattrickard
Matt Rickard
1 year
@alexalbert__
Alex Albert
1 year
Friday docs feature drop: You can now access all of our docs concatenated as a single plain text file that can be fed in to any LLM. Here's the url route: https://t.co/ILkO9q4fyk
0
0
7
@charliermarsh
Charlie Marsh
1 year
The secret behind my productivity is that I'm almost exclusively a print-statement debugger
59
85
2K
@mitchellh
Mitchell Hashimoto
1 year
Thanks to a couple contributors, Ghostty has built-in VCS branching characters. Kitty introduced this[1] and I think Ghostty is the second terminal to implement it. The cool detail in this one: for the end markers they're drawn with a true fade (see line 13). [1]:
33
34
1K
@paulgb
Paul Butler
1 year
We've been shipping https://t.co/nigREmLzaK as an open-source project for over a year. Now you can deploy it on @JamsocketHQ as a natively-supported service, without touching a Dockerfile!
github.com
A realtime CRDT-based document store, backed by S3. - jamsocket/y-sweet
@JamsocketHQ
Jamsocket
1 year
Today we're launching Y-Sweet on Jamsocket! Y-Sweet is a Yjs sync server + document store that makes building realtime applications like Google Docs easy. https://t.co/6t183FwfWp
1
5
27
@irvinebroque
Brendan Irvine-Broque
1 year
We built a platform to run containers across Cloudflare’s network. It’s different. It schedules workloads globally — you don’t deal with regions. It fits with Anycast (of course). And we're integrating it with Workers and Durable Objects — use the right tool for the job, every
9
30
259