John Kinloch
@on3John
Followers
192
Following
2K
Media
11
Statuses
176
CEO and founder at @on3_works Building the future of work with AI
London/Mallorca
Joined September 2023
@deepseek_ai R1: 90% Cheaper Than O1—And It Learns to Reason Without All Those Pre-Labeled Examples! Thread on why this destroys the "hitting a wall" argument and what this could mean for AI in 2025🧵👀
7
33
180
Is AI already impacting the job market? A new paper from me, @erikbryn, and @RuyuChen at @DigEconLab digs into data from ADP. We find some of the ***first large-scale evidence of employment declines for entry-level workers in AI-exposed jobs.*** A thread on our paper:
47
446
2K
This is such a bad take, and it clearly shows a poor understanding of how agent coding will be implemented in the not-too-distant future. Obviously you don't want to be coding on your phone right now, but that isn't what agents will be doing.
the moment Claude Code won ☝️ "Oh what I always wanted is to get on the bus and vibe code on my SaaS" said literally no human being ever
0
0
1
Cognitive scientist Elan Barenholtz says memory isn't retrieval. It's generation. When you remember something, you're not accessing a stored file. You're prompting your mind, like an AI model, to synthesize a response. There is no image of your mother. He's exploring how we
126
230
2K
Why can AIs code for 1h but not 10h? A simple explanation: if there's a 10% chance of error per 10-min step (say), the success rate is: 1h: 53% 4h: 8% 10h: 0.18% @tobyordoxford has tested this 'constant error rate' theory and shown it's a good fit for the data
183
631
4K
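The geometric decay the tweet describes is easy to reproduce. A quick sketch (note that under these exact assumptions, a 10% error chance per 10-minute step, the 10h figure works out to about 0.18%):

```python
# Sketch of the 'constant error rate' model: with a fixed 10% chance of
# error per 10-minute step, overall success probability decays
# geometrically with task length.

def success_rate(hours, p_error_per_step=0.10, step_minutes=10):
    steps = int(hours * 60 / step_minutes)
    return (1 - p_error_per_step) ** steps

for h in (1, 4, 10):
    print(f"{h}h: {success_rate(h):.2%}")
# 1h: 53.14%, 4h: 7.98%, 10h: 0.18%
```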
🧵For Claude Opus 4, we ran our first pre-launch model welfare assessment. To be clear, we don’t know if Claude has welfare. Or what welfare even is, exactly? 🫠 But, we think this could be important, so we gave it a go. And things got pretty wild…
53
72
658
"Embeddings are underrated" (2024): just a really excellent piece of technical writing.
23
98
2K
it’s over. turns out the rl victory lap was premature. new tsinghua paper quietly shows the fancy reward loops just squeeze the same tired reasoning paths the base model already knew. pass@1 goes up, sure, but the model’s world actually shrinks. feels like teaching a kid to ace
160
257
3K
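The pass@k metric the tweet leans on has a standard unbiased estimator (from the Codex/HumanEval evaluation methodology): pass@k = 1 - C(n-c, k)/C(n, k). A two-problem sketch, with made-up counts purely for illustration, shows how a model that concentrates on paths it already knows can raise pass@1 while losing pass@64 — the "world shrinks" effect:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased estimator: probability that at least one of k samples
    (drawn from n attempts, c of them correct) passes."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical per-problem correct counts out of n=64 samples: the base
# model solves both problems occasionally; the RL-tuned model piles
# probability onto problem A and loses problem B entirely.
base = {"A": 4, "B": 4}
rl = {"A": 40, "B": 0}

def mean_pass(counts, k, n=64):
    return sum(pass_at_k(n, c, k) for c in counts.values()) / len(counts)

print(mean_pass(base, 1), mean_pass(rl, 1))    # RL wins at pass@1
print(mean_pass(base, 64), mean_pass(rl, 64))  # base wins at pass@64
```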
Everyone should be using this website to understand the inside of an LLM. I'm surprised more people don't know about it. Brendan Bycroft made this beautiful interactive visualization that shows exactly how an LLM works internally, weight by weight. Here's a link:
69
919
7K
Try to crunch enormous amounts of data and run complex calculations in seconds and see how far you get. Then try to argue that GPT-4 isn't smarter than you. It isn't a clear yes/no answer, though. We are smarter at most things, for now. At some others, it is A LOT better
0
0
1
The key thing here is that AI will not be vastly better than any human, including someone like Sam Altman, at every intellectual task. It comes down to how you define "smarter", and an argument could be made that GPT-4 already is smarter.
2
1
2
It looks like the AI agent job market is arriving. I’m not entirely sure how much of @firecrawl_dev’s post on LinkedIn was just a marketing play versus a real strategy to attract AI agents and developers, but I do believe we’re about to start discussing these agents in terms of
2
3
56
It’s amazing that no model has caught up with Claude on coding. Even if they look good on benchmarks, they’re still not as good at generating working, good-looking, modern web apps. Whatever magic Anthropic did seems very durable.
250
298
6K
If there’s something worse than current EU regulation/support, it’s our own pessimism. We mustn’t forget how much we can do, how agentic we actually are, and how much change is possible. @lovable_dev is obviously the best current example of EU founders just doing awesome things
EUROPE IS COMING BACK 🇪🇺 This September, Europe’s finest and most ambitious builders unite at a historic location in Berlin. Join us for a fresh dawn of European tech. All-in on Europe.
0
0
3
As we are learning, DeepSeek is one of the most sophisticated psyops of all time. Here's how it went down: 1) Release the model open source. 2) Include highly detailed papers so others can replicate your work. 3) Create a novel SOTA RL algo that uses less memory
546
1K
14K
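The memory-saving RL algorithm being referenced is presumably GRPO, described in DeepSeek's papers: it drops the learned value network (the memory saving) and instead standardizes each completion's reward against the other completions sampled for the same prompt. A minimal sketch of that advantage computation, assuming binary rule-based rewards:

```python
# Sketch of GRPO-style group-relative advantages: no critic network,
# just each reward standardized within its sampling group.
def group_advantages(rewards, eps=1e-8):
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# e.g. 4 completions for one prompt, scored 0/1 by a rule-based verifier:
adv = group_advantages([1.0, 0.0, 0.0, 1.0])
# correct completions get positive advantage, incorrect ones negative
```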