John Kinloch

@on3John

Followers 192 · Following 2K · Media 11 · Statuses 176

CEO and founder at @on3_works Building the future of work with AI

London/Mallorca
Joined September 2023
@on3John
John Kinloch
11 months
@deepseek_ai R1: 90% Cheaper Than O1—And It Learns to Reason Without All Those Pre-Labeled Examples! Thread on why this destroys the "hitting a wall" argument and what this could mean for AI in 2025🧵👀
7
33
180
@BharatKChandar
Bharat Chandar
4 months
Is AI already impacting the job market? A new paper from me, @erikbryn, and @RuyuChen at @DigEconLab digs into data from ADP. We find some of the first large-scale evidence of employment declines for entry-level workers in AI-exposed jobs. A thread on our paper:
47
446
2K
@jxnlco
jason liu
5 months
lessons from finetuning rerankers with @lancedb
6
32
382
@on3John
John Kinloch
5 months
This is such a bad take and quite clearly shows bad understanding of how agent coding will be implemented in the not too distant future. Obviously right now you don’t want to be coding on your phone - this isn’t what they’ll be doing though.
@d4m1n
Dan ⚡️
5 months
the moment Claude Code won ☝️ "Oh what I always wanted is to get on the bus and vibe code on my SaaS" said literally no human being ever
0
0
1
@on3John
John Kinloch
6 months
Job-seekers we want to hear from you. Link below
5
6
41
@vitrupo
vitrupo
6 months
Cognitive scientist Elan Barenholtz says memory isn't retrieval. It's generation. When you remember something, you're not accessing a stored file. You're prompting your mind, like an AI model, to synthesize a response. There is no image of your mother. He's exploring how we
126
230
2K
@ben_j_todd
Benjamin Todd
6 months
Why can AIs code for 1h but not 10h? A simple explanation: if there's a 10% chance of error per 10min step (say), the success rate is: 1h: 53% 4h: 8% 10h: 0.002% @tobyordoxford has tested this 'constant error rate' theory and shown it's a good fit for the data chance of
183
631
4K
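The arithmetic in the tweet above can be checked directly. A minimal sketch, assuming each 10-minute step fails independently with the same probability: 0.9^6 ≈ 53% for 1h and 0.9^24 ≈ 8% for 4h match the tweet, while 0.9^60 ≈ 0.0018, so the tweet's "10h: 0.002%" reads as the raw probability 0.002 (about 0.2%) rather than a percentage.

```python
def success_rate(hours: float, p_error: float = 0.10, step_minutes: int = 10) -> float:
    """Probability that every step of an `hours`-long task succeeds,
    assuming independent failures at a constant rate per step."""
    steps = int(hours * 60 / step_minutes)
    return (1 - p_error) ** steps

for h in (1, 4, 10):
    print(f"{h:>2}h ({h * 6} steps): {success_rate(h):.4f}")
```

The exponential decay is the whole story: doubling task length squares the success probability, which is why long-horizon agent tasks fall off a cliff under this model.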
@fish_kyle3
Kyle Fish
7 months
🧵For Claude Opus 4, we ran our first pre-launch model welfare assessment. To be clear, we don’t know if Claude has welfare. Or what welfare even is, exactly? 🫠 But, we think this could be important, so we gave it a go. And things got pretty wild…
53
72
658
@jxmnop
dr. jack morris
7 months
"Embeddings are underrated" (2024). Just a really excellent piece of technical writing.
23
98
2K
@iruletheworldmo
🍓🍓🍓
8 months
it’s over turns out the rl victory lap was premature. new tsinghua paper quietly shows the fancy reward loops just squeeze the same tired reasoning paths the base model already knew. pass@1 goes up, sure, but the model’s world actually shrinks. feels like teaching a kid to ace
160
257
3K
@deedydas
Deedy
10 months
Everyone should be using this website to understand the inside of an LLM. I'm surprised more people don't know about it. Brendan Bycroft made this beautiful interactive visualization to show exactly how the inner workings of each of the weights of an LLM work. Here's a link:
69
919
7K
@on3John
John Kinloch
10 months
Try and crunch enormous amounts of data and run complex calculations in seconds and see how far you get. Then try and argue that GPT-4 isn't smarter than you. It isn't. But it isn't a clear yes/no answer. We are smarter at most things, for now. At some others, it is A LOT better
0
0
1
@on3John
John Kinloch
10 months
Key thing here is that AI will not be vastly better than any human, including someone like Sam Altman, at every intellectual task. It's a case of defining smarter—and an argument could be made that GPT-4 already is smarter.
@kimmonismus
Chubby♨️
10 months
Sam Altman: GPT-5 will be smarter than him.
2
1
2
@on3John
John Kinloch
10 months
It looks like the AI agent job market is arriving. I’m not entirely sure how much of @firecrawl_dev’s post on LinkedIn was just a marketing play versus a real strategy to attract AI agents and developers, but I do believe we’re about to start discussing these agents in terms of
2
3
56
@amasad
Amjad Masad
10 months
It’s amazing that no model has caught up with Claude on coding. Even if they look good on benchmark they’re still not as good at generating working good looking modern web apps. Whatever magic Anthropic did seems very durable.
250
298
6K
@agazdecki
Andrew Gazdecki
10 months
When startup founders go to therapy:
35
49
733
@on3John
John Kinloch
10 months
If there’s something worse than current EU regulation/support, it’s our own pessimism. We mustn’t forget how much we can do, how agentic we actually are, and how much change is possible. @lovable_dev is obviously the best current example of EU founders just doing awesome things
@christianreber
Christian Reber 🇪🇺
10 months
EUROPE IS COMING BACK 🇪🇺 This September, Europe’s finest and most ambitious builders unite at a historic location in Berlin. Join us for a fresh dawn of European tech. All-in on Europe.
0
0
3
@Dan_Jeffries1
Daniel Jeffries
10 months
As we are learning DeepSeek is one of the most sophisticated psyops of all time. Here's how it went down: 1) Release the model open source. 2) Include highly detailed papers for all other people to replicate your work. 3) Create a novel SOTA RL algo that uses less memory
546
1K
14K
@itsolelehmann
Ole Lehmann
11 months
GM Europe
118
71
1K
@on3John
John Kinloch
11 months
👀
@Alibaba_Qwen
Qwen
11 months
🚀 New Approach to Training MoE Models! We’ve made a key change: switching from micro-batches to global-batches for better load balancing. This simple tweak lets experts specialize more effectively, leading to: ✅ Improved model performance ✅ Better handling of real-world
0
0
1
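The tweet above describes moving MoE load-balancing statistics from micro-batches to the global batch. A minimal NumPy sketch of why that matters (not Qwen's actual implementation; `balance_loss` here is a Switch-Transformer-style auxiliary loss used purely for illustration): when each micro-batch is dominated by one domain, per-micro-batch balancing penalizes expert specialization that the pooled global batch would show to be balanced overall.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, tokens_per_micro, n_micro = 4, 256, 8

def balance_loss(probs: np.ndarray, ids: np.ndarray) -> float:
    """Auxiliary balance loss: n_experts * sum(fraction_routed * mean_router_prob).
    Minimized (value 1.0) when routing is uniform across experts."""
    frac = np.bincount(ids, minlength=n_experts) / len(ids)
    mean_p = probs.mean(axis=0)
    return n_experts * float(frac @ mean_p)

# Simulate micro-batches where each one is dominated by a different "domain"
# that prefers one expert (domain-specific specialization).
micros = []
for i in range(n_micro):
    logits = rng.normal(size=(tokens_per_micro, n_experts))
    logits[:, i % n_experts] += 2.0  # domain i leans on expert i
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    micros.append((probs, probs.argmax(axis=1)))

# Micro-batch balancing: average the loss computed per skewed micro-batch.
micro_loss = float(np.mean([balance_loss(p, i) for p, i in micros]))

# Global-batch balancing: pool all tokens first, then compute once.
all_p = np.vstack([p for p, _ in micros])
all_i = np.concatenate([i for _, i in micros])
global_loss = balance_loss(all_p, all_i)

print(f"micro-batch loss: {micro_loss:.3f}  global-batch loss: {global_loss:.3f}")
```

The per-micro-batch loss is substantially higher because each micro-batch looks imbalanced in isolation, so that objective actively fights specialization; computed over the global batch, the same routing looks balanced and the experts are free to specialize.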