
Gary Marcus (@GaryMarcus)
195K Followers · 83K Following · 3K Media · 54K Statuses
“In the aftermath of GPT-5’s launch … the views of critics like Marcus seem increasingly moderate.” —@newyorker
Joined December 2010
Three thoughts on what really matters:
1. Fuck cancer
2. Friends are irreplaceable
3. The new "Marcus test" for AI is when AI makes a significant dent on cancer
May that happen sooner, much sooner, rather than later. In memory of my childhood friend Paul.
94 replies · 71 reposts · 2K likes
Sharp criticism of AI from Wolfram Weimar: "In an almost vampiric way, AI companies are currently sucking the creative potential out of countless bright minds, exploiting their ideas and feelings, their creative power, their vision. In doing so, the great cultural achievement
4 replies · 11 reposts · 34 likes
AI academics are falsely dichotomizing in droves tonight. how about a little nuance, folks? leave the false dichotomies to politicians?
There are basically only two positions in the debate about AI. 1. I’ve barely invested any time in learning how to use it effectively. AI sucks. 2. I’ve invested in learning how to use this tool. Holy cow, it’s transformational. Position #1 has lower barriers to entry.
14 replies · 6 reposts · 60 likes
As someone who was part of the first wave of generative AI criticism (back to 2019) I resent this. The message was never “it doesn’t work”; it was always that it doesn’t work reliably and can’t be trusted on its own. Literally hundreds of times I said it was useful for some things. All
I hope as we move past the first wave of AI criticism ("it doesn't work, all hype") we get a new wave of AI criticism rooted in the fact that these systems are very powerful & quite useful, focusing on a deep exploration of when AI uses are uplifting and when they are detrimental
28 replies · 13 reposts · 162 likes
Says the guy wearing a suit made by a tailor, taking testosterone synthesized by scientists, who drove to work in a car built by engineers, after flying in a plane controlled by a pilot, in a home adjusted by an electrician, with pipes installed by a plumber…
RFK Jr: We need to stop trusting the experts... Trusting the experts is not a feature of science or democracy, it's a feature of religion and totalitarianism.
123 replies · 247 reposts · 1K likes
GPU depreciation is gonna be a bitch. In fact, it already is 🫧
23 replies · 31 reposts · 234 likes
On science, AI influencers, and intellectual honesty. Scientist turned “AI influencer” @DeryaTR_, who has taken at least some funding from OpenAI, has repeatedly gone after me, often with intellectual dishonesty. It’s a good case study. Derya knows perfectly well, as a
15 replies · 8 reposts · 61 likes
NYT on whether AGI is the right goal for right now: https://t.co/ES46lyn24h [gift link]
AGI definitions: https://t.co/InpXnzfnNr
Substack with some caveats on AGI Definitions:
nytimes.com
Generative A.I. can do many things human beings can do. But that misses the point about how A.I. can truly benefit us.
1 reply · 4 reposts · 21 likes
Should we continue to pursue AGI, right now? And what the heck is AGI, anyway? I have three new articles today on these questions. The first (on whether chatbots are a waste of AI’s potential) is in the New York Times. The second is on AGI definitions, with @DanHendrycks,
23 replies · 19 reposts · 86 likes
Sorry to say, @sama, but you have become your own punchline. September: There aren’t enough chips; we are bottlenecked. We’re gonna have to make tough choices: cancer vs. education. October: We make porn and slop, because that’s where the money is.
43 replies · 76 reposts · 501 likes
agreed. thank you, @demishassabis, for keeping AI’s spotlight on science rather than porn and slop.
@GaryMarcus @sundarpichai I'm at least thankful that Google is going the proper AI route we're rooting for, rather than the degeneracy developed by other tech companies.
13 replies · 10 reposts · 343 likes
Kudos to @SundarPichai for stating this exciting new finding exactly right: the system generated a novel hypothesis, validated in living cells, but not yet in living organisms. I saw a lot of confusion on this yesterday. Read what he said carefully to understand what has and has
An exciting milestone for AI in science: Our C2S-Scale 27B foundation model, built with @Yale and based on Gemma, generated a novel hypothesis about cancer cellular behavior, which scientists experimentally validated in living cells. With more preclinical and clinical tests,
17 replies · 13 reposts · 237 likes
@GaryMarcus Gary’s right — “distribution shift” is one of AI’s biggest unsolved problems. In simple terms, it means this: when an AI system is trained on one type of data but then encounters something even slightly different in the real world, its performance often collapses. It’s like
10 replies · 9 reposts · 32 likes
ok, time for a new bet: I bet that GPT-5 can’t write a romance novel (without extensive plagiarism) that some reasonable panel of judges finds readable enough to make it through to the end.
@GaryMarcus I think 'AGI porn' could be revolutionary for at least the global digital adult content market (~$100 billion, not sure how much of that is written works). I could imagine AI one-shotting an erotic novel for a person's sexual interests. Maybe it gets teenagers reading again??
15 replies · 4 reposts · 83 likes
new theory: what Ilya saw was that … AGI porn was not in fact going to be all that revolutionary
22 replies · 8 reposts · 99 likes
Golden words.
@vingthor84 projecting that the enterprise won’t be profitable is not the same as shorting the market; we all know the market can remain irrational longer than individual investors remain solvent.
0 replies · 1 repost · 8 likes
Wild that many people still can’t see that techniques for augmenting math won’t necessarily generalize well to the open-ended real world, even after years of stupendously great systems like Deep Blue, Watson, and AlphaGo having relatively little lasting impact.
3 years ago: "it can't even multiply 2-digit numbers!" 1 year ago: "okay but it'll never win the math olympiad!" today: "okay but it can't solve the Riemann Hypothesis!" it's wild that people can't see that this eventually leads to the end of man tbh
12 replies · 9 reposts · 83 likes