Richard Blythman
@richardblythman
Followers
2K
Following
2K
Media
99
Statuses
1K
Founder @NapthaAI | Machine Learning Engineer | AI Engineer | Multi-Agent Systems Researcher | Fluid Dynamicist
Dublin City, Ireland
Joined August 2011
anyone know a good sauna + cold plunge in SF (ideally not a full spa)?
5/ Engineering shifts from features to discovery infrastructure. Competitive advantage shifts from capabilities to guidance quality. The companies that win aren't the ones who build the most. They're the ones who onboard the best. How are you using AI to improve user onboarding?
4/ Traditional onboarding doesn't scale. You can't write docs for 500 features. You can't create tutorials for every workflow. The only thing that scales is automated, intelligent onboarding: products that learn what each user is trying to accomplish, what they're ready to learn
3/ If every company can build everything, your competitor replicates your features instantly. Defensibility is in how quickly you get users to value. Adoption depth becomes the moat. Not how many features you have, but how many features users actually use. Shallow adoption means
2/ When software was expensive to build, you built less. Onboarding was manageable. Your moat was the product itself. But when software is cheap to build, you build everything. Users don't onboard at AI speed though. They onboard at human speed. Ship 10x more features while users
1/ We used to compete on what we could build. Now we compete on what users can figure out. AI coding tools collapsed the cost of software creation. We've 10x'd productivity. Since we haven't 10x'd onboarding, does that become the place where companies will compete and
We talk about "developer experience" like it's something we design, but most of the time we're just writing about it. We've convinced ourselves that good DX means good documentation. But documentation is just content *about* the experience. It's not the experience itself. The
More agent protocols for tool calling and agent-to-agent communication are coming out. What protocols might be next as we move towards higher levels of abstraction? What would protocols for orchestration, business frameworks, or finding product-market fit look like? Business
film at palace of fine arts tomorrow about a man's 800-year quest to build an artificial planet from scratch. anyone game?
Starting to use some more advanced features of my voice app Wispr Flow, and it's making me think about something deeper: What's the maximum throughput between human intention and digital execution? I press a key, speak a command, and the app inserts text at my cursor. Pressing a
What lies beyond the attention economy? Are we entering the experience economy? I'm starting to value continuity of context over raw processing power. A Claude Code session that remembers our previous conversations and builds on our shared history is worth more than a fresh AI
There's something profound about voicing thoughts directly to your computer. I've been speaking to Claude Code to manage documents, organize files, and navigate my work, and it's changed everything about how intimate computing feels. It's 5-6x faster than typing, yes. But speed
5/ Contributions and suggestions very welcome. What should we prioritize next? Drop your ideas in the issues tab or reply here. The future of AI coding agents depends on how well they understand YOUR library.
4/ Next up, we’re planning to add more: • Coding agents • Ways of providing docs as context (e.g. Mintlify vs Cursor doc search) • Benchmark tasks (e.g. use of APIs via API docs) • Metrics We're also working on automating in-editor testing and maybe even using an MCP
3/ The problem? Existing benchmarks focus on self-contained snippets, not real library usage. We asked 50+ developer-focused companies: "Do you know how well agents use your library or API?" Most didn't. That's what we're fixing.
2/ For those just catching up: StackBench tests how well AI coding agents (like Claude Code, and now Cursor) use your library by: • Parsing your documentation automatically • Extracting real usage examples • Having agents generate those examples from a spec from scratch •
docs.stackbench.ai
Benchmark coding agents on library-specific tasks through local deployment and open source community collaboration
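The parse → extract → regenerate loop described in the thread can be sketched roughly like this. This is a hypothetical illustration, not the actual StackBench code: the function names (`extract_examples`, `score`), the markdown-parsing heuristic, and the text-similarity grading are all my assumptions.

```python
# Rough sketch of a StackBench-style benchmark loop (hypothetical code,
# not the real StackBench implementation).
import difflib

def extract_examples(doc_text: str) -> list[dict]:
    """Pull fenced code blocks out of markdown docs, pairing each block
    with the nearest preceding prose line as its natural-language spec."""
    examples, spec, code_lines, in_code = [], "", [], False
    for line in doc_text.splitlines():
        if line.startswith("```"):
            if in_code:  # closing fence: emit one (spec, code) pair
                examples.append({"spec": spec, "code": "\n".join(code_lines)})
                code_lines = []
            in_code = not in_code
        elif in_code:
            code_lines.append(line)
        elif line.strip():
            spec = line.strip()
    return examples

def score(reference: str, generated: str) -> float:
    """Crude textual similarity between the doc's example and what the
    agent generated from the spec alone (1.0 = identical)."""
    return difflib.SequenceMatcher(None, reference, generated).ratio()

# Demo: one doc example; the agent would be prompted with ex["spec"].
docs = "Create a client with an API key.\n```\nclient = Client(api_key='...')\n```"
ex = extract_examples(docs)[0]
agent_output = "client = Client(api_key='...')"  # stand-in for an agent's answer
print(score(ex["code"], agent_output))  # identical strings score 1.0
```

A real harness would hand `ex["spec"]` to Claude Code or Cursor and execute the generated snippet against the library; the string comparison here just stands in for that grading step.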
1/ Last week we made it to the front page of HN asking: how well do coding agents use your libraries? The response was great overall, but many wanted to see the code. StackBench is Now Open Source
github.com
Contribute to NapthaAI/openstackbench development by creating an account on GitHub.
i dig into how we are using claude code for non-technical organizational work in this episode of the podcast.
ONESHOTTED @dongossen @HoodAndLedger @richardblythman
@NapthaAI @Nevermined_ai 08/22/2025 We delve (ahem) into whether we have, indeed, been oneshotted, by @_opencv_'s definition. But hey, who hasn't? Major CEOs, politicians, the White House, even Randy from South Park got got