Chris Clark Profile
Chris Clark

@cclark

Followers
881
Following
250
Media
68
Statuses
1K

COO @OpenRouterAI. Co-founder and former CTO @GroveCollab

Charleston, SC
Joined April 2007
@cclark
Chris Clark
12 days
Love to see open weight models here.
@OpenRouterAI
OpenRouter
12 days
Qwen3 Coder has now passed Grok 4 in the Programming prompt rankings. Tied with Kimi!
[image]
0
0
4
@cclark
Chris Clark
13 days
I like and value dedicated QA (and believe manual testing still has its place too) BUT if you do it wrong/too early without building a quality culture first, then you get abdication of responsibility from engineering. "I tested the happy path and it works, so here you go QA!".
0
0
3
@cclark
Chris Clark
13 days
When interviewing a candidate, the goal is to get to a strong yes or no. If the feedback is "they might be good; hard to tell", the interview failed. I fail ~25% of the time -- it's hard! But it's important to understand the goal, and to train your team for it.
0
0
1
@cclark
Chris Clark
15 days
My Lumina toothpaste finally arrived (frankly I kind of forgot I ordered it). But now I'm too scared to try it. And I do get cavities.
1
0
2
@cclark
Chris Clark
24 days
RT @ArtificialAnlys: Developers consider an average of 4.7 LLM families with OpenAI GPT/o, Google Gemini and Anthropic Claude being the mos…
0
2
0
@cclark
Chris Clark
24 days
RT @ArtificialAnlys: We’re releasing the Artificial Analysis AI Adoption Survey Report for H1 2025 based on >1,000 responses from developer…
0
36
0
@cclark
Chris Clark
24 days
RT @pingToven: Coming soon to OpenRouter 🔥
0
5
0
@cclark
Chris Clark
24 days
Good judgement, not code, is the new bottleneck. And it's always been the thing in shortest supply.
@lennysan
Lenny Rachitsky
27 days
Product management is becoming the new bottleneck according to @AndrewYNg . "I don't see product management work becoming faster at the same speed as engineering. I'm seeing this ratio shift. Just yesterday, one of my teams came to me, and for the first time, when we're planning
1
0
0
@cclark
Chris Clark
30 days
The fact that diffusion LLMs exist & work is a pretty strong argument that LLMs are fundamentally different from human intelligence. I can sort of convince myself I might just be a next token predictor — but no f’ing way I’m a diffusion model!
1
0
0
@cclark
Chris Clark
1 month
Weird that 'stringy' and 'stingy' are pronounced completely differently.
0
0
1
@cclark
Chris Clark
1 month
RT @PyythonEX: @lauriewired GPT posts: You're not ready for this next pro function. Gemini: Ship ship ship. Claude: The model just said it…
0
1
0
@cclark
Chris Clark
1 month
I liked ChatGPT’s advanced voice more when it was precise like JARVIS and not “yeah, well so, it is true that uhh…”
2
0
3
@cclark
Chris Clark
1 month
At face value, there is a ~500x cost difference between the cheapest and the most expensive models. But with output tokens ~3x more expensive than input, and high-end models’ reasoning outputs adding another order of magnitude, the actual cost difference is close to 5000x!
1
0
1
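The arithmetic in the tweet above can be sketched with a toy per-request cost comparison. All prices and token counts here are hypothetical placeholders chosen to illustrate the multipliers (500x list price, ~3x output premium, ~10x reasoning-token blowup), not real provider rates:

```python
# Hypothetical per-million-token prices, assumed for illustration only.
cheap_in, cheap_out = 0.05, 0.10   # $/M tokens, budget model
big_in, big_out = 25.00, 75.00     # $/M tokens, frontier model (output ~3x input)

def cost(p_in, p_out, in_tok, out_tok):
    """Dollar cost of one request at the given per-million-token prices."""
    return p_in * in_tok / 1e6 + p_out * out_tok / 1e6

# Same 10k-token prompt; the frontier model's hidden reasoning inflates
# its output from 1k tokens to ~30k tokens.
cheap = cost(cheap_in, cheap_out, 10_000, 1_000)
big = cost(big_in, big_out, 10_000, 30_000)

print(f"list-price gap: {big_in / cheap_in:.0f}x")  # roughly 500x
print(f"effective gap:  {big / cheap:.0f}x")        # roughly 4000x per request
```

With these assumed numbers the per-request gap lands around 4000x, in the same ballpark as the tweet's "close to 5000x"; the exact figure depends entirely on the prices and reasoning-token counts you plug in.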
@cclark
Chris Clark
1 month
RT @deedydas: Karpathy at YC startup school calls this the transfer switch of AI. It's hard to keep up with all the LLMs out there. OpenRo….
0
66
0
@cclark
Chris Clark
1 month
Why is it that all designers are expected to have a portfolio, but not engineers? They both make things.
2
0
2
@cclark
Chris Clark
2 months
RT @OpenRouterAI: Ranking of LLMs by the % of JSON violations we detected over the past week, for the top Structured Output requests. ⭐️ Qw…
0
87
0
@cclark
Chris Clark
2 months
This type of error is what drove the overpopulation panic of the 90s.
0
0
0
@cclark
Chris Clark
2 months
A sigmoid looks a lot like exponential growth, until suddenly it doesn’t. I think it’s likely that AI model progress is halfway up a sigmoid, vs exponentially gaining capability. This doesn’t make AI any less exciting, but does make me less worried about superintelligence.
1
0
1
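The sigmoid-vs-exponential point in the tweet above is easy to see numerically. A minimal sketch with assumed toy parameters: a logistic curve with capacity 100, next to the exponential that matches its early growth:

```python
import math

# Toy comparison (parameters assumed): logistic curve with capacity 100,
# inflection at t=10, vs. its early-time exponential approximation.
def sigmoid(t):
    return 100 / (1 + math.exp(-(t - 10)))

def exponential(t):
    # For t well below 10, the logistic above is ~ 100 * e^(t - 10).
    return 100 * math.exp(t - 10)

for t in (0, 5, 9, 12, 15):
    print(f"t={t:2d}  sigmoid={sigmoid(t):10.2f}  exponential={exponential(t):10.2f}")
```

Early on (t=0 or t=5) the two curves are nearly indistinguishable; by t=15 the sigmoid has flattened near 100 while the exponential has blown past 10,000. That is the shape of the argument: the curves only separate near the inflection point.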
@cclark
Chris Clark
2 months
The constant reinvention of front end frameworks has finally stopped because frameworks are now selected for their LLM legibility. New frameworks are definitionally outside of the training set and will therefore never catch on.
1
0
3