Matt

@0xLienid

Followers: 1K · Following: 13K · Media: 347 · Statuses: 8K

Building Living Machines @Noetic_Labs

Joined June 2021
@0xLienid
Matt
16 hours
>tricky ml problem
>add noise
>everything gets better
[image]
0 replies · 0 retweets · 0 likes
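The joke compresses a real family of regularizers: input noise, label smoothing, dropout, and SGD's own gradient noise all smooth what the model learns. A minimal sketch of the input-noise variant; the model, sigma, and training-step shape are illustrative assumptions, not anything from the tweet.

```python
import torch
import torch.nn as nn

# Toy model and optimizer; architecture and hyperparameters are assumptions.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(x, y, sigma=0.1):
    # Perturb inputs with zero-mean Gaussian noise during training only;
    # evaluation uses clean inputs, so the noise acts as a regularizer.
    x_noisy = x + sigma * torch.randn_like(x)
    opt.zero_grad()
    loss = loss_fn(model(x_noisy), y)
    loss.backward()
    opt.step()
    return loss.item()
```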
@0xLienid
Matt
2 days
Dead Intellectual Insight Theory
@ns123abc
NIK
3 days
OpenAI co-founder: “gpt-5 pro for novel mathematics — in partnership with a math professor”

Mathematicians: “At first glance, this might appear useful for an exploratory phase, helping us save time. In practice, however, it was quite the opposite”

> only seems to support …
[image]
0 replies · 0 retweets · 3 likes
@0xLienid
Matt
2 days
You have to remember. The Universal Approximation Theorem describes what the model CAN represent. It is the job of YOU, the member of the technical staff, to make sure that the approximation it finds is Good.
0 replies · 0 retweets · 0 likes
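The theorem/training distinction is easy to demo. A toy sketch, where the architecture, seeds, and ranges are all my own assumptions: two identical MLPs each fit sin(x) well on the training interval, yet the approximations they find diverge outside it.

```python
import torch
import torch.nn as nn

def fit(seed):
    # Same capacity, different seed: the UAT says this net CAN represent
    # sin(x); which approximation Adam actually finds varies per run.
    torch.manual_seed(seed)
    net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    x = torch.linspace(-3, 3, 256).unsqueeze(1)
    y = torch.sin(x)
    for _ in range(2000):
        opt.zero_grad()
        loss = ((net(x) - y) ** 2).mean()
        loss.backward()
        opt.step()
    return net

a, b = fit(0), fit(1)
x_ood = torch.linspace(6, 9, 5).unsqueeze(1)  # beyond the training range
with torch.no_grad():
    print(a(x_ood).flatten().tolist())  # both nets fit the training data,
    print(b(x_ood).flatten().tolist())  # yet their extrapolations disagree
```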
@0xLienid
Matt
2 days
Agree with the premise, disagree with the conclusion. One of the core problems is the sharpness of the loss landscape where the parameters end up. The sharper it is, the more haywire predictions go as you move beyond the data distribution. This sharpness is, in my opinion, also …
[image]
@fchollet
François Chollet
3 days
When you store your knowledge and skills as parametric curves (as all deep learning models do), the only way you can generalize is via interpolation on the curve. The problem is that interpolated points *correlate* with the truth but have no *causal* link to the truth. Hence …
1 reply · 0 retweets · 0 likes
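As a concrete reading of "sharpness" here, one crude, common proxy is the worst-case loss increase under small random parameter perturbations (roughly the quantity SAM-style training tries to keep small). Everything below, the model, data, eps, and trial count, is an illustrative assumption.

```python
import copy
import torch
import torch.nn as nn

def sharpness(model, loss_fn, x, y, eps=1e-2, trials=10):
    # Proxy for local sharpness: worst-case loss increase over a few random
    # parameter perturbations of scale ~eps. Flat minima barely move; sharp
    # ones spike, and (per the tweet) go haywire fastest off-distribution.
    with torch.no_grad():
        base = loss_fn(model(x), y).item()
        worst = 0.0
        for _ in range(trials):
            probe = copy.deepcopy(model)
            for p in probe.parameters():
                p.add_(eps * torch.randn_like(p))
            worst = max(worst, loss_fn(probe(x), y).item() - base)
    return worst

# Usage with a stand-in model and random data (assumptions, not real data):
model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
x, y = torch.randn(128, 4), torch.randn(128, 1)
print(sharpness(model, nn.MSELoss(), x, y))
```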
@0xLienid
Matt
2 days
A fundamental, horrific misunderstanding of why OOD is important
@khoomeik
Rohan Pandey
3 days
many fear that "RL can't generalize OOD"

even if true, this no longer matters if you just bring all the tasks you care about in distribution

that's more or less what happened in pretraining and it seems to have worked out pretty well
0 replies · 0 retweets · 2 likes
@0xLienid
Matt
4 days
You think God is a curve over a static data distribution? Stop it. Seek help.
2 replies · 2 retweets · 8 likes
@0xLienid
Matt
6 days
You need to be more Bob Noyce pilled

You cannot be Bob Noyce pilled enough actually
[image]
0 replies · 0 retweets · 2 likes
@0xLienid
Matt
6 days
Deep Blue did not yield AGI. RL for math and code will see the same fate. Such is the Generalization Gap.
0 replies · 0 retweets · 2 likes
@0xLienid
Matt
6 days
Math and code are much like chess in the 90s. The Ruling Technological Class views them as hallmarks of their Intelligence. Therefore, they believe, if we get really great at them, we must have discovered something fundamental about Intelligence.
@mikeknoop
Mike Knoop
6 days
Pay close attention to code and math domains because they're leading indicators of how good RL-trained AI reasoning systems can get with current ideas.
1 reply · 0 retweets · 2 likes
@0xLienid
Matt
7 days
The number one opportunity is accelerating the flow of state-of-the-art results and procedures
@vaibhavbetter
Vaibhav Domkundwar
7 days
The AI opportunity in US healthcare in one simple chart.
[image]
0 replies · 0 retweets · 1 like
@0xLienid
Matt
7 days
Wow. Wonder who could be working on that...
@RichardSSutton
Richard Sutton
7 days
Dwarkesh Patel is 100% right on this: AI's utility is very strongly dependent on continual learning. https://t.co/YR54QlaqZK
0 replies · 1 retweet · 6 likes
@0xLienid
Matt
8 days
the frontier labs are committed to benchmark maxxing to keep the capital flowing for their ridiculous capex martingale
@khoomeik
Rohan Pandey
8 days
the frontier labs are committed to making models that are capable of doing science

but science is an iterative *empirical* process, a smart new model alone won’t revolutionize it

it’s now up to scientists to apply these models *in the lab* to answer humanity’s biggest questions
0 replies · 0 retweets · 2 likes
@0xLienid
Matt
8 days
One week after Dario openly admits their whole strategy is a braindead martingale: buy depreciating assets to build a commodity

See ya October 21st
@tbpn
TBPN
8 days
BREAKING: @AnthropicAI has raised a $13B Series F round led by @ICONIQCapital, with @Fidelity and @lightspeedvp as co-leads. This round raises their valuation to $183 billion.
[image]
0 replies · 0 retweets · 1 like
@0xLienid
Matt
9 days
It’s astonishing how much AI philosophizing from the early-mid 2010s has turned out to be absolutely useless drivel

The product of a fetish for one’s own intellect, and a brain turned to mush by middling sci-fi
0 replies · 1 retweet · 6 likes
@0xLienid
Matt
9 days
Broke and fucked

High lifestyle with a fake mysterious source of income

The two faces of Gen Z unemployment
@iamgingertrash
simp 4 satoshi
10 days
Extreme unemployment
Gov stats are faked
Expect revisions
Getting worse

Most people
Have two jobs
Cannot pay rent

Stagflation is near
Buckle up
0 replies · 0 retweets · 3 likes
@0xLienid
Matt
11 days
Why did OpenAI make “Chart Crimes” their core branding?
@aidan_mclau
Aidan McLaughlin
11 days
the jump from gpt4 -> gpt5 was obviously larger than the jump from gpt3 -> gpt4
[image]
0
1
4
@0xLienid
Matt
12 days
You can do cool things by viewing these concepts as separate
0 replies · 0 retweets · 1 like
@0xLienid
Matt
13 days
Does it bother no one else that eot tokens are used both to accumulate latent state and to mark end of text as an actual token? So much of the latent space gets dedicated to one of the easier tasks, marking the end of a thought, because of that confoundment.
1 reply · 0 retweets · 3 likes
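One sketch of how the two roles could be decoupled: register a dedicated sink token, so the eot/eos token only ever means "the text ends here". This leans on the Hugging Face transformers API; the <|sink|> token name and gpt2 checkpoint are assumptions, and the new embedding would of course need fine-tuning before it does anything useful.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical extra special token whose only job is to soak up attention
# and accumulate latent state, freeing eos from its double duty.
tok.add_special_tokens({"additional_special_tokens": ["<|sink|>"]})
model.resize_token_embeddings(len(tok))

ids = tok("<|sink|> the model gets a dedicated place to park latent state",
          return_tensors="pt")
out = model(**ids)  # untrained for the new token; fine-tuning would follow
```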
@mikeknoop
Mike Knoop
17 days
Current AI can be trained to solve any problem given sufficient human motivation (to organize capital, labor, and data). It requires no further AI progress. But cost exceeds motivation for most problems. Increasing AI learning efficiency makes more problems economically viable.
13 replies · 8 retweets · 123 likes
@0xLienid
Matt
19 days
has anyone tried this in language?
@hturan
harley turan
20 days
exploring paths through latent space by interpolating between known embeddings
2 replies · 0 retweets · 4 likes
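On "has anyone tried this in language": the direct analogue would be to interpolate between sentence embeddings and snap each waypoint to the nearest known sentence, since decoding free text from an arbitrary latent point is the hard part. A sketch using spherical interpolation; embed is a stand-in for any sentence encoder, not a specific library call.

```python
import numpy as np

def slerp(a, b, t):
    # Spherical interpolation keeps waypoints on the unit hypersphere, which
    # tends to behave better than straight lerp in high-dimensional spaces.
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    omega = np.arccos(np.clip(a @ b, -1.0, 1.0))
    if omega < 1e-6:
        return a
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

def nearest(v, sentences, vectors):
    # vectors: one unit-normalized embedding per candidate sentence, per row
    sims = vectors @ (v / np.linalg.norm(v))
    return sentences[int(np.argmax(sims))]

# Usage, assuming a hypothetical embed(text) -> np.ndarray and a candidate
# corpus with precomputed unit-norm embeddings corpus_vecs:
# path = [nearest(slerp(embed(s1), embed(s2), t), corpus, corpus_vecs)
#         for t in np.linspace(0, 1, 8)]
```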