
Matt @0xLienid
Followers 1K · Following 13K · Media 347 · Statuses 8K
Building Living Machines @Noetic_Labs
Joined June 2021
Dead Intellectual Insight Theory
OpenAI co-founder: “gpt-5 pro for novel mathematics — in partnership with a math professor”

Mathematicians: “At first glance, this might appear useful for an exploratory phase, helping us save time. In practice, however, it was quite the opposite”

> only seems to support
0 · 0 · 3
You have to remember. The Universal Approximation Theorem describes what the model CAN represent. It is the job of YOU, the member of the technical staff, to make sure that the approximation it finds is Good.
0 · 0 · 0
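(For reference, a standard one-hidden-layer statement of the theorem, in the Cybenko/Hornik form; note it only guarantees that a good approximation *exists*, not that training finds it:)

```latex
\[
  g(x) = \sum_{i=1}^{N} \alpha_i \, \sigma\!\left(w_i^\top x + b_i\right),
  \qquad
  \sup_{x \in K} \bigl|\, f(x) - g(x) \,\bigr| < \varepsilon
\]
% For any continuous f on a compact K \subset \mathbb{R}^n, any \varepsilon > 0,
% and a fixed non-polynomial activation \sigma, some width N and parameters
% (\alpha_i, w_i, b_i) achieve this. The statement is purely existential:
% nothing in it says gradient descent on finite data recovers such a g.
```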
Agree with the premise, disagree with the conclusion. One of the core problems is the sharpness of the loss landscape where the parameters end up. The sharper it is, the more haywire predictions go as you move beyond the data distribution. This sharpness is, in my opinion, also
When you store your knowledge and skills as parametric curves (as all deep learning models do), the only way you can generalize is via interpolation on the curve. The problem is that interpolated points *correlate* with the truth but have no *causal* link to the truth. Hence
1 · 0 · 0
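(A minimal sketch of the interpolation point; the polynomial family and numbers are illustrative, any parametric curve behaves the same way:)

```python
import numpy as np

# Fit a parametric curve (a degree-7 polynomial) to sin(x) on the
# training interval [0, 2*pi].
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 2 * np.pi, 200)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=7)

def model(x):
    return np.polyval(coeffs, x)

# In distribution: interpolation on the curve tracks the truth closely.
x_in = np.linspace(0.0, 2 * np.pi, 50)
print("in-dist max error: ", np.max(np.abs(model(x_in) - np.sin(x_in))))

# Out of distribution: same curve, but the fit only ever *correlated*
# with the truth, so beyond the data the predictions go haywire while
# sin stays bounded.
x_out = np.linspace(3 * np.pi, 4 * np.pi, 50)
print("out-dist max error:", np.max(np.abs(model(x_out) - np.sin(x_out))))
```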
A fundamental, horrific misunderstanding of why OOD is important
many fear that "RL can't generalize OOD"

even if true, this no longer matters if you just bring all the tasks you care about in distribution

that's more or less what happened in pretraining and it seems to have worked out pretty well
0 · 0 · 2
You think God is a curve over a static data distribution? Stop it. Seek help.
2 · 2 · 8
You need to be more Bob Noyce pilled

You cannot be Bob Noyce pilled enough, actually
0 · 0 · 2
Deep Blue did not yield AGI. RL for math and code will see the same fate. Such is the Generalization Gap.
0 · 0 · 2
Math and code are much like chess in the 90s. The Ruling Technological Class views them as hallmarks of their Intelligence. Therefore, they believe, if we get really great at them, we must have discovered something fundamental about Intelligence.
Pay close attention to code and math domains because they're leading indicators of how good RL-trained AI reasoning systems can get with current ideas.
1 · 0 · 2
The number one opportunity is accelerating the flow of state-of-the-art results and procedures
0 · 0 · 1
Wow. Wonder who could be working on that...
Dwarkesh Patel is 100% right on this: AI's utility is very strongly dependent on continual learning. https://t.co/YR54QlaqZK
0 · 1 · 6
the frontier labs are committed to benchmark maxxing to keep the capital flowing for their ridiculous capex martingale
the frontier labs are committed to making models that are capable of doing science

but science is an iterative *empirical* process, a smart new model alone won’t revolutionize it

it’s now up to scientists to apply these models *in the lab* to answer humanity’s biggest questions
0 · 0 · 2
One week after Dario openly admits their whole strategy is a braindead martingale: buy depreciating assets to build a commodity.

See ya October 21st
BREAKING: @AnthropicAI has raised a $13B Series F round led by @ICONIQCapital, with @Fidelity and @lightspeedvp as co-leads. This round raises their valuation to $183 billion.
0 · 0 · 1
It’s astonishing how much AI philosophizing from the early-mid 2010s has turned out to be absolutely useless drivel.

The product of a fetish for one’s own intellect, and a brain turned to mush by middling sci-fi.
0 · 1 · 6
Does it bother no one else that eot tokens are used both to accumulate latent state and to mark the end of text as an actual token? So much of the latent space gets dedicated to one of the easier tasks, marking the end of a thought, because the two roles are confounded.
1 · 0 · 3
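(Concretely, in GPT-2's vocabulary the two jobs share a single id; a quick check with tiktoken, just to illustrate the pattern being complained about:)

```python
import tiktoken

enc = tiktoken.get_encoding("gpt2")

# One id, two jobs: <|endoftext|> is both the literal stop marker and the
# document separator during pretraining, where (per the complaint above)
# its hidden state ends up accumulating latent summary information.
print(enc.eot_token)                                  # 50256
print(enc.encode("<|endoftext|>",
                 allowed_special={"<|endoftext|>"}))  # [50256]
```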
Current AI can be trained to solve any problem given sufficient human motivation (to organize capital, labor, and data). It requires no further AI progress. But cost exceeds motivation for most problems. Increasing AI learning efficiency makes more problems economically viable.
13 · 8 · 123
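(A toy reading of that last claim, with entirely made-up numbers: a problem is viable when its value covers the cost of bringing it in distribution, and a learning-efficiency multiplier shifts how many problems clear the bar:)

```python
# All values hypothetical, for illustration only.
def viable_count(problems, efficiency):
    """Count problems whose value covers cost scaled down by efficiency."""
    return sum(value >= cost / efficiency for value, cost in problems)

# (value, cost at baseline learning efficiency) pairs.
problems = [(10, 50), (5, 8), (100, 400), (2, 1), (30, 90)]

for eff in (1.0, 2.0, 10.0):
    print(f"efficiency x{eff:g}: {viable_count(problems, eff)} of {len(problems)} viable")
```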