Rob S. (@RobS142) · 291 followers · 826 following · 36 media · 2K statuses
prediction markets are quickly moving from interesting technical novelty to demonic force in the world. pic unrelated
One of the few real use cases of blockchain imo
Anatar will manufacture millions of products. While we cannot release a catalog, every item produced by Anatar or a Loom network facility includes our Loom tag. Scan it to access the product's Digital Product Passport (DPP) and instantly verify its provenance and value.
I know in AI time it's 8793 years ago, but when o3's ARC-AGI score was released last Dec, I got tons of texts saying "wow, amazing" but also despondent for humanity and what it meant for, e.g., their kids' future. GPT-5.2 beat o3 at a 99.8% lower cost per task and it's like, cool, what's next.
A year ago, we verified a preview of an unreleased version of @OpenAI o3 (High) that scored 88% on ARC-AGI-1 at est. $4.5k/task. Today, we've verified a new GPT-5.2 Pro (X-High) SOTA score of 90.5% at $11.64/task. This represents a ~390X efficiency improvement in one year.
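A quick sanity check on the "~390X" claim, using only the two per-task costs quoted in the post ($4.5k/task for o3 (High), $11.64/task for GPT-5.2 Pro):

```python
# Sanity-check the claimed ~390X cost-efficiency improvement
# from the per-task figures quoted in the post.
o3_cost_per_task = 4500.00    # est. $4.5k/task for o3 (High)
gpt52_cost_per_task = 11.64   # $11.64/task for GPT-5.2 Pro (X-High)

ratio = o3_cost_per_task / gpt52_cost_per_task
reduction = 1 - gpt52_cost_per_task / o3_cost_per_task

print(f"cost ratio: {ratio:.0f}X")        # ~387X, consistent with "~390X"
print(f"cost reduction: {reduction:.1%}")  # ~99.7% lower cost per task
```

The raw ratio comes out to roughly 387X, which the post rounds to ~390X.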
you can just finetune Qwen3, pick a fancy name, open an "autoformalization company aiming for math superintelligence," get some random brokie math guys on staff, and raise money. btw, there's still a couple of months before this trick fades
Truly bizarre take. There are many reasons to slow AI development. Doing so to ensure scientists still have fun cushy jobs doing science is absolutely not one of them.
I was at an event on AI for science yesterday, a panel discussion here at NeurIPS. The panelists discussed how they plan to replace humans at all levels in the scientific process. So I stood up and protested that what they are doing is evil. Look around you, I said. The room is
AGI timelines - had dinner with a bunch of oai / anthro / cursor / tesla researchers. a quarter think AGI is already here, a quarter think it will happen before the end of 2027, a quarter before 2030, and the rest never. different definitions of AGI but most common is "can replace my job"
Is this just the lizardman effect? I don't really understand how these numbers could be right. It's like going to an APS meeting and finding that 30% of the attendees don't know what General Relativity is. What?
The results are in. Just 69.5% of people (n=115) at this NeurIPS knew what AGI stands for. This is only slightly up from last year. See you again next year!
The varied ways of doing nothing.
The idea of "tropical paradise" sort of gives me the willies. Climatically-enforced languor seems kind of sick, bizarre, unpleasant. It's so hot you must do nothing. You must stay by the beach, periodically getting in the water to cool off. If you do anything else, you sweat to
Yes. We have to worry about AI and robotics. Some questions:
When people say "but it won't wipe out ALL humans... some of us will survive as (owners of capital / pets / etc.)," they've already lost the argument.
I’d estimate that my lifetime of time saved by getting to airports with minutes to spare has already been eaten by a 24 hour plus nightmare of connecting flights and layovers required to get somewhere after missing one international flight. I don’t think people are correctly
Reminder that basically everyone agrees that if AGI is coming soon, then AI risk is a huge problem & AI safety a priority. True for AI researchers as well as the general public. Honest-to-god ASI accelerationists are v rare, & basically the entire fight is over "ASI plausibly soon"
I asked ~20 non AI safety people at NeurIPS for their opinion of the AI safety field. Some people immediately were like "this is really good". But the response I heard the most often was of the form "AGI isn't coming soon, so these safety people are crazy". This was surprising to
A painful claim from Danny Kahneman I can’t stop thinking about. “One implication is obvious. You should replace humans with algorithms whenever possible. Even when the algorithm does not do very well, humans do so poorly and are so noisy that, just by removing the noise, you can
40% of those earning over $300k are living paycheck to paycheck, per Goldman Sachs, $GS.