Hadrian Veidt

@HadrianVeidt0

Followers: 368 · Following: 1K · Media: 86 · Statuses: 1K

AI // Entropy // Hubris // Information // Systems

Joined April 2025
@HadrianVeidt0
Hadrian Veidt
21 hours
I said this a month ago, though it's not necessarily novel.
@karpathy
Andrej Karpathy
21 hours
Something I think people continue to have poor intuition for: The space of intelligences is large and animal intelligence (the only kind we've ever known) is only a single point, arising from a very specific kind of optimization that is fundamentally distinct from that of our…
0 · 0 · 0
@HadrianVeidt0
Hadrian Veidt
1 day
Competition is the accelerant of innovation.
@btibor91
Tibor Blaho
2 days
"Altman last month assured staff that OpenAI would gain ground in the coming months, including with a new LLM, codenamed Shallotpeat. In developing that model, OpenAI aims to fix bugs it has encountered in the pretraining process, according to a person with knowledge of the…"
0 · 0 · 1
@HadrianVeidt0
Hadrian Veidt
2 days
I am feeling the AGI, @apples_jimmy. Nano Banana is much more than a simple image model.
1 · 1 · 53
@HadrianVeidt0
Hadrian Veidt
2 days
"Unlike previous AI systems that plateau after a few hours, Locus maintains consistent performance improvement up to several days by orchestrating thousands of experiments simultaneously. This massive parallelization enables a new type of scientific process, one that facilitates…"
@IntologyAI
Intology
3 days
Introducing Locus: the first AI system to outperform human experts at AI R&D Locus conducts research autonomously over multiple days and achieves superhuman results on RE-Bench given the same resources as humans, as well as SOTA performance on GPU kernel & ML engineering tasks.
0 · 0 · 1
@HadrianVeidt0
Hadrian Veidt
3 days
On GPT-5.1 Pro: “Put simply, it is just scary smart. It feels like a better reasoner than most humans. I fully expect to see examples in the coming days of it solving problems people assumed were far out of bounds for today's AI systems.”
@mattshumer_
Matt Shumer
3 days
I've had access to GPT-5.1 Pro for the last week. It's a fucking monster... easily the most capable and impressive model I've ever used. But it's not all positive. Here's my review of GPT-5.1 Pro: https://t.co/kFvRZRjpWy
9 · 13 · 225
@HadrianVeidt0
Hadrian Veidt
3 days
Exponential continues.
0 · 0 · 0
@HadrianVeidt0
Hadrian Veidt
3 days
"Pretraining hasn't hit a wall, and neither has test-time compute."
@polynoamial
Noam Brown
3 days
Today we at @OpenAI are releasing GPT-5.1-Codex-Max, which can work autonomously for more than a day over millions of tokens. Pretraining hasn't hit a wall, and neither has test-time compute. Congrats to my teammates @kevinleestone & @mikegmalek for helping to make it possible!
0 · 0 · 1
@HadrianVeidt0
Hadrian Veidt
3 days
GPT-5.1 Codex Max worked autonomously for 24+ hours.
@HadrianVeidt0
Hadrian Veidt
3 days
Holy shit. ~Infinite context windows have arrived.
0 · 0 · 1
@HadrianVeidt0
Hadrian Veidt
3 days
Holy shit. ~Infinite context windows have arrived.
0 · 0 · 0
@HadrianVeidt0
Hadrian Veidt
3 days
All signs point to AlphaEvolve quietly shaping parts of Google’s Ironwood TPU, the silicon used for training Gemini 3. An AI designed the circuits for a chip that in turn became the engine for a more powerful AI model. The machine is building the machine.
@jenzhuscott
Jen Zhu
4 days
Wait, Gemini 3 was trained on TPUs?!
0 · 0 · 5
@ChrisJBakke
Chris Bakke
4 days
It's November 18, 2026. You check your timeline: 3:07pm: Elon announces Grok 7.8198 - the world's most powerful model. 3:09pm: Sam announces GPT 9.2081 - the *new* world's most powerful model. 3:10pm: Sundar announces Gemini 6.3902 - the *new, new* world's most…
132 · 201 · 4K
@HadrianVeidt0
Hadrian Veidt
4 days
This flew under the radar today. AI model access to Google Scholar. It's small things like this that will quietly accelerate science and discovery while very few realize it.
1 · 0 · 3
@HadrianVeidt0
Hadrian Veidt
4 days
Apps for AI. Interesting concept. Will agents pay for intermediate-layer sub-goal resolution? I think so, assuming the ultimate transaction layer is owned by the Amazons, OpenAIs, etc.
@VictorTaelin
Taelin
4 days
probably dumb take but I think that, in a not-so-far future, people will make a lot of money creating apps, and even companies, with no human clients, consumed entirely by bots with money
0 · 0 · 0
@HadrianVeidt0
Hadrian Veidt
4 days
"No walls in sight!"
@OriolVinyalsML
Oriol Vinyals
4 days
The secret behind Gemini 3? Simple: Improving pre-training & post-training 🤯 Pre-training: Contra the popular belief that scaling is over—which we discussed in our NeurIPS '25 talk with @ilyasut and @quocleix—the team delivered a drastic jump. The delta between 2.5 and 3.0 is…
0 · 0 · 0
@HadrianVeidt0
Hadrian Veidt
4 days
On Gemini 3: Vending-Bench jumps out at me because it ties model performance directly to economic value. Gemini 3 Pro earns several times the net worth of the other frontier models on that test. In my view, a model that can reliably maximize profit across long-horizon, tool…
0 · 0 · 1
@HadrianVeidt0
Hadrian Veidt
4 days
Gemini 3 benchmarks look very impressive.
@Teknium
Teknium (e/λ)
4 days
Gemini 3 leak - some crazy improvements on math, screen understanding, and simpleqa.. somehow beaten by sonnet on swebench but winning on terminalbench Lower context length than 2.5 pro too 🫣 https://t.co/qQah8GPDIV
0 · 0 · 1