Known as Mad Max for my unorthodox ideas and passion for adventure, my scientific interests range from artificial intelligence to the ultimate nature of reality
Here's why I think there's now a one-in-six chance of an imminent global #NuclearWar, and why I appreciate @elonmusk and others urging de-escalation, which is IMHO in the national security interest of all nations:
Help us find an unsung hero! If they win, they get $50k & you get up to $3k for nominating/spreading the word. Our first 3 awards went to Vasili #Arkhipov, Stanislav #Petrov & Matthew #Meselson, for helping prevent 2 #nuclear wars & 1 bioweapon arms race.
We just made the most scientifically complete #NuclearWar simulation to date, from fires to famine. As you can see, about 99% die in the US, Europe, Russia & China. Winner: nobody.
My beloved dad died peacefully this morning, after 92 inspiring orbits around the sun, retaining his dark humor and epic stoicism until the very end. I feel so grateful for all his love, wisdom and encouragement, and for getting to do 53 orbits together. ❤️
Do language models have an internal world model? A sense of time? At multiple spatiotemporal scales?
In a new paper with @tegmark we provide evidence that they do by finding a literal map of the world inside the activations of Llama-2!
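To make concrete what a "literal map" could mean here, below is a minimal sketch of a linear probe from activations to latitude/longitude. The arrays are synthetic placeholders standing in for Llama-2 residual-stream activations of place-name tokens; the sizes and names are illustrative assumptions, not the paper's code.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Placeholder data: in the real experiment each row would be a Llama-2
# activation vector for a place-name token, and coords its true (lat, lon).
n_places, d_model = 5000, 4096
acts = np.random.randn(n_places, d_model)                                   # activations (synthetic)
coords = np.random.uniform([-90.0, -180.0], [90.0, 180.0], (n_places, 2))   # (lat, lon)

X_tr, X_te, y_tr, y_te = train_test_split(acts, coords, test_size=0.2, random_state=0)
probe = Ridge(alpha=1.0).fit(X_tr, y_tr)         # linear probe: activation -> (lat, lon)
print("held-out R^2:", probe.score(X_te, y_te))

On real activations, a high held-out R^2 from such a simple linear readout is the sense in which a map of the world is linearly decodable from the model.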
What nuclear war looks like from space based on data from peer-reviewed science papers. A Nature Food paper today suggests that over 98% would starve to death in the US, Europe, China & Russia.
Dear everyone who wants to regulate and slow down AI: please stop fighting over who has the Most Correct Reason for the slow down. Just work together and make it happen! Reasons in alphabetical order:
* autonomous weapons
* bias
* biosecurity
* children's safety
(more...)
Is an international climate treaty hopeless? No! 😃 That's how we mended the #OzoneHole, thanks to the heroes in this video, so we can succeed again with our future climate! #FutureOfLifeAward #Optimism
It's disgraceful that US & EU policymakers prefer getting advice on how to regulate AI from AI company leaders instead of academic experts like Bengio, Hinton & Russell who lack conflicts of interest:
Let's not just focus on whether #GPT4 will do more harm or good on the job market, but also on whether its coding skills will hasten the arrival of #superintelligence – for which AI safety researchers have so far failed to discover any safety guarantees:
Please welcome our first class of @VitalikButerin Fellows in AI Existential Safety, and consider collaborating with them to ensure that AI becomes the best rather than the worst thing to happen to humanity!
I'll miss you, @StephenHawking8! I've lost a long-time collaborator and, above all, a great inspiration, always reminding me of how seemingly insurmountable challenges can be overcome with creativity, willpower and a positive attitude:
I'm excited that our new paper on machine learning for physics was just published in Physical Review Letters. We show how to auto-discover conservation laws from observed data alone using a cool technique for measuring the dimensionality of a data set.
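To illustrate the dimensionality idea mentioned in that tweet (a toy sketch, not the paper's actual estimator): states of a 1-D harmonic oscillator live in a 2-D phase space, but energy conservation confines each trajectory to a 1-D curve, so the gap between the phase-space dimension and the estimated intrinsic dimension counts conserved quantities.

import numpy as np

# Toy data: 1-D harmonic oscillator states (x, p) sampled along one trajectory.
t = np.linspace(0, 50, 2000)
states = np.stack([np.cos(t), -np.sin(t)], axis=1)   # points lie on a circle of constant energy

def local_pca_dim(points, k=20, var_threshold=0.95):
    # Crude intrinsic-dimension estimate: average number of principal components
    # needed to explain var_threshold of the variance in each k-nearest-neighbour patch.
    dims = []
    for i in range(0, len(points), 50):
        d2 = np.sum((points - points[i]) ** 2, axis=1)
        patch = points[np.argsort(d2)[:k]]
        evals = np.sort(np.linalg.eigvalsh(np.cov(patch.T)))[::-1]
        frac = np.cumsum(evals) / evals.sum()
        dims.append(np.searchsorted(frac, var_threshold) + 1)
    return float(np.mean(dims))

intrinsic = local_pca_dim(states)
print("intrinsic dimension ≈", intrinsic)                      # ≈ 1
print("conserved quantities ≈", states.shape[1] - intrinsic)   # 2 - 1 = 1 (energy)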
I feel honored and excited to be back on @LexFridman's podcast; he's really deep, and has a unique knack for getting into the most important and exciting questions about tech, life & reality.
Here's my conversation with Max Tegmark (@tegmark). Our first chat was episode #1 of this podcast. Now he's back! We talk about the intersection of machine learning and physics, and also about how to avoid near-term and long-term existential threats of AI.
With our new #AI algorithm, we finally managed to discover new conservation laws that domain experts didn't know about, in both fluid mechanics and atmospheric chemistry:
I'm excited about our method for machine-learning new physics. It auto-discovers e.g. Neptune & gravitational radiation by detecting energy conservation violation even when the physical laws are unknown:
We just produced this sequel to our Slaughterbots movie, since they're now here for real and the UN is debating whether to #BanSlaughterbots. What do *you* prefer?
Let's make #AI like biotech, where companies must demonstrate safety, rather than the civilian nuclear industry, where poor safety standards gave us Three Mile Island, Chernobyl, Fukushima and a backlash that crushed the industry:
After 9 years, without giving a reason, Facebook just canceled my account with tens of thousands of followers and countless conversations. Are you next?
@SamHarrisOrg @elonmusk Sam, are you conflating de-escalation with capitulation? Do you agree that taunting Putin with tweets of Marilyn Monroe singing happy birthday by his burning bridge is goading him to escalate, which isn't in our interest and should be criticized?
We’re making big bets, and we are going to deliver on those bets. First up: we’re going to field attritable autonomous systems at a scale of multiple thousands, in multiple domains, within the next 18-to-24 months. Together, we can do this. @NDIAToday #EmergingTechETI
I’ve been shocked to discover exactly this over the years through personal conversations. It helps explain why some AI researchers aren’t more bothered by human extinction risk: It’s *not* that they find it unlikely, but that they welcome it!
There are a significant number of people in the AI research community who explicitly think humans should be replaced by AI as the natural next step in evolution, and the sooner the better!
If you're still not concerned about humanity losing control of recursively self-improving AI, see if #DeepLearning Godfather Geoffrey Hinton can persuade you:
I like you, Yann, and as a colleague I'd like to encourage you to apologize to @dan_hendricks for attacking him for his religious upbringing. You really debase yourself by stooping so low as to make lame ad hominem attacks. FYI, I'm not from an ultra-religious family, but as you…
As I have pointed out before, AI doomerism is a kind of apocalyptic cult.
Why would its most vocal advocates come from ultra-religious families (that they broke away from because of science)?
We build a lie detector for large language models and show that they represent uncontroversially true and false sentences on opposite sides in a linear space. This is IMHO further evidence that they're not just overhyped “stochastic parrots”:
Do language models know whether statements are true/false? And if so, what's the best way to "read an LLM's mind"?
In a new paper with @tegmark, we explore how LLMs represent truth. 1/N
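As a hedged illustration of the "opposite sides in a linear space" claim, here is a minimal linear-probe sketch using random placeholder activations with a planted truth direction; the paper's statement datasets, layer choice and probe details are not reproduced here.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder: in the real setup these would be LLM hidden states on curated
# true vs. false statements; here a synthetic "truth direction" is planted.
rng = np.random.default_rng(0)
d_model, n = 4096, 2000
truth_direction = rng.normal(size=d_model)
labels = rng.integers(0, 2, size=n)                               # 1 = true, 0 = false
acts = rng.normal(size=(n, d_model)) + 0.05 * np.outer(2 * labels - 1, truth_direction)

X_tr, X_te, y_tr, y_te = train_test_split(acts, labels, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)         # linear probe
print("probe accuracy:", probe.score(X_te, y_te))
# High accuracy of a purely *linear* probe is the sense in which true and false
# statements sit on opposite sides of a hyperplane in activation space.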
Only 4% of Americans strongly disagree with the proposed pause on AI more powerful than #GPT4, so loud pause critics linked to big tech aren't representative. Upton Sinclair said "It is difficult to get a man to understand something, when his salary depends on his not…
IMHO, @SnoopDogg now captures the magnitude of what's happening in #AI better than most tech pundits with their financial conflicts of interest – and most policymakers and corporate lobbyists...
I’m struck by how out-of-touch many of my tech colleagues are in their rich nerd echo chamber, unaware that most people are against making humans economically obsolete with AI:
This is surreal to watch: SXSW audiences were booing and screaming at pro-AI videos
SXSW is (partially) a tech conference!
They lose their shit when it says “AI makes us more human”
I repeat: in 1-5 years, if we're still alive, I expect the biggest protests humanity has ever…
The main risk with advanced #AI isn't malice, but competence: that it accomplishes goals that aren't aligned with ours. Here are some hilarious and cautionary examples of how this can happen accidentally:
As we celebrate the start of our new annual orbit around the Sun, it’s fun to remember that our Sun in turn orbits our Galaxy every 300 million years - we’ve done about 15 Galactic laps so far 🎉🥳
Yann, I'd love to hear you make arguments rather than acronyms. Thanks to @RishiSunak & @vonderleyen for realizing that AI xrisk arguments from Turing, Hinton, Bengio, Russell, Altman, Hassabis & Amodei can't be refuted with snark and corporate lobbying alone.
The UK Prime Minister has caught the Existential Fatalistic Risk from AI Delusion disease (EFRAID).
Let's hope he doesn't give it to other heads of state before they get the vaccine.
"An AI safety summit at Bletchley Park in November is expected to focus almost entirely on…
"Don't regulate AI – just trust the companies!" Does he also support abolishing the FDA and letting biotech companies sell whatever meds they want without FDA approval, because biotech is too complicated for policymakers to understand?
WATCH: Former Google CEO @ericschmidt tells #MTP Reports the companies developing AI should be the ones to establish industry guardrails — not policy makers.
“There’s no way a non-industry person can understand what’s possible.”
I'm very excited about our latest AI paper, which shows that an architecture radically different from standard neural networks achieves much better accuracy with fewer parameters for interesting physics and math problems. arXiv:2402.05110
MLPs are so foundational, but are there alternatives? MLPs place activation functions on neurons, but can we instead place (learnable) activation functions on weights? Yes, we KAN! We propose Kolmogorov-Arnold Networks (KAN), which are more accurate and interpretable than MLPs.🧵
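A minimal PyTorch sketch of the stated idea: put a small learnable 1-D function on every edge and sum them, instead of a fixed activation on every neuron. It uses RBF basis functions for brevity; the actual KAN paper uses B-splines plus a base function, so treat this as an illustrative stand-in rather than the paper's implementation.

import torch
import torch.nn as nn

class KANEdgeLayer(nn.Module):
    # y_j = sum_i phi_ij(x_i), where each phi_ij is a learnable combination
    # of fixed RBF basis functions (one coefficient vector per edge).
    def __init__(self, in_dim, out_dim, n_basis=8, x_range=(-2.0, 2.0)):
        super().__init__()
        self.register_buffer("centers", torch.linspace(*x_range, n_basis))
        self.width = (x_range[1] - x_range[0]) / n_basis
        self.coef = nn.Parameter(0.1 * torch.randn(in_dim, out_dim, n_basis))

    def forward(self, x):                                   # x: (batch, in_dim)
        basis = torch.exp(-((x.unsqueeze(-1) - self.centers) / self.width) ** 2)
        return torch.einsum("bik,iok->bo", basis, self.coef)

# Usage: fit y = sin(x0) + x1^2, a function that separates per input coordinate.
model = nn.Sequential(KANEdgeLayer(2, 5), KANEdgeLayer(5, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.rand(1024, 2) * 4 - 2
y = torch.sin(x[:, :1]) + x[:, 1:] ** 2
for _ in range(500):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
print("final MSE:", loss.item())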
Huge news: A who's who of AI leaders agree that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Got data and wonder if there's a formula describing it? Try our new physics-inspired AI Feynman algorithm, published today. It automates what took Kepler 4 years:
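To make the symbolic-regression idea concrete, here is a toy brute-force search over a tiny space of candidate formulas. The real AI Feynman pipeline additionally uses neural-network fits to exploit symmetries, separability and dimensional analysis before brute-forcing, so this sketch shows only the simplest ingredient.

import itertools
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(1, 5, 200), rng.uniform(1, 5, 200)
y = x1 * x2 ** 2                                   # the "unknown" formula to recover

unary = {"id": lambda a: a, "sq": lambda a: a ** 2, "sqrt": np.sqrt, "log": np.log}
binary = {"+": np.add, "*": np.multiply, "/": np.divide}

best = None
for (n1, f1), (n2, f2), (nb, fb) in itertools.product(unary.items(), unary.items(), binary.items()):
    err = np.mean((fb(f1(x1), f2(x2)) - y) ** 2)   # score candidate formula nb(n1(x1), n2(x2))
    if best is None or err < best[0]:
        best = (err, f"{nb}({n1}(x1), {n2}(x2))")
print("best formula:", best[1], "with MSE:", best[0])   # expects *(id(x1), sq(x2)), i.e. x1 * x2^2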
If you agree with Yoshua Bengio, Stuart Russell, Yuval Noah Harari and others on pausing training of risky blackbox AI systems more powerful than #GPT4, please join me in signing this open letter:
@elonmusk Your hypocrisy is hilarious, @ylecun, portraying yourself and Meta as fighting Big Tech lobbyists - by agreeing with them - and being the biggest of all! Source: a European Observatory report
The true AI “doomers” are the e/acc folks who are so misanthropic that they’re fine with AI replacing our entire species. Who here on X prefers AI that complements rather than replaces us?
@benkohlmann @SecRaimondo e/acc "founders" (if this term can be used for a meme ideology) openly stated (they've toned it down for PR reasons, not a change of heart) that AI annihilating and replacing all of humanity would be fine, if not outright desirable:
CNN: “It’s time to call Putin’s bluff”
Putin: “I’m not bluffing”
It seems to me that Western media is dangerously biased toward downplaying the nuclear war threat and against de-escalation.
Mitigating the risk of extinction from AI should be a global priority.
And Europe should lead the way, building a new global AI framework built on three pillars: guardrails, governance and guiding innovation ↓
I hope I'm wrong! Let's ban #NuclearWar decision-makers from bringing loved ones to their government-provided bunkers, to give them more skin in the game.
I'm so tired of all these dumb decel doomers who freak out over #ChatGPT, #Gemini, #Claude & #Grok and claim that machines can one day get larger than people – it's obviously impossible:
LOL - it's hilarious that nations can be so confident that their lethal autonomous weapons won't malfunction when they can't even get their videoconferencing to work reliably.
It's jarring watching the UN meeting on #KillerRobots this week. They can't get the remote participants' videoconferencing working, while some delegates argue with a straight face that technology will lead to better civilian protection because 'databases and machine learning'.
The best video I've seen about how #AGI can kill democracy. If you're skeptical of existential threat talk, rest assured that this film does *not* focus on it.
Today we’re releasing “The A.I. Dilemma” – a new talk @aza and I gave on 3/9, a week before GPT4 launched.
*Pls share it widely.* It's critical for institutions to understand how the race between AI labs is accelerating the likelihood of catastrophe:
My new TED talk argues that we should (and *can*) keep increasingly powerful AI under human control – if we do our homework and don't get carried away by hubris!
I’ve only done a small amount of neuroscience research, but enough to appreciate how amazing it is: Nolan is controlling his cursor with thoughts detected by a Bluetooth device in his head.
After pushing for an international AI safety summit for 9 years, I was really moved to get to be part of it finally happening! @RishiSunak had more success than I'd expected, with striking unity between 1) US & China, 2) those focused on current harms vs existential risk & 3)…
77 years ago today, the US nuked Nagasaki; the nuke below is 2500x stronger. I've never before seen so little MSM coverage of this grim anniversary, even though we now IMHO face the greatest risk of nuclear war since 1962. Why the eerie media silence?
Thanks #VasiliArkhipov for ensuring that today isn't the anniversary of #WorldWar3! IMHO, this was the single most valuable act in recorded human history.
We introduce a system for fine-grained robotic manipulation! 🤖
What’s new?
* We can control cheap robots to do surprisingly dexterous tasks
* New technique that allows robots to learn fine motor skills
A short thread 🧵
75 years ago the world saw the catastrophic humanitarian consequences of nuclear weapons. It must not happen again. Only nuclear zero is worthy of the victims of Hiroshima & Nagasaki. PM Ardern urges all States to join the Treaty on the Prohibition of Nuclear Weapons #nuclearban
Thanks @ylecun for the long and thoughtful reply! You're conflating two separate questions:
1) Is superintelligence an xrisk?
2) Is open-sourcing good?
You criticized @RishiSunak over 1). I replied, but you still gave no arguments for why it's not an xrisk, just repeated the…
@tegmark @RishiSunak @vonderleyen Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment.
They are the ones who are attempting to perform a regulatory capture of the AI industry.
You, Geoff, and Yoshua are giving ammunition to those who are lobbying for a ban on open AI R&D.
If…
In contrast to #AlphaGo, the shocking #AI news here isn't the ease with which #AlphaZero crushed human *players*, but the ease with which it crushed human *AI researchers*, who'd spent decades hand-crafting ever better chess software:
We made this #AI video explaining the 3 main narratives to help you make up your own mind: (1) it's overhyped, (2) it's a big deal but will be fine, (3) risks are real.
For those of you who don’t work in AI: Yoshua is widely respected as one of the most brilliant and influential AI researchers, and runs the largest non-corporate AI research center in the world. So worth listening to.
Yoshua Bengio:
'For most of these years, I did not think about the dual-use nature of science because our research results seemed so far from human capabilities and the work was only academic. It was a pure pursuit of knowledge, beautiful, but mostly detached from society until…
The #coronavirus reminds us that we must make humanity more resilient. Toby Ord's epic new book “The Precipice”, launched today, separates science from hype and will remain the definitive work on existential risk for a long time to come.
My own university contributed to killer robot proliferation by training the founder of this Turkish company. Please encourage @MIT to join @UCL & @DeepMind by signing the Autonomous Weapons Pledge:
Yay – our new $20M MIT-led AI-physics center got funded! We'll tackle two of the greatest mysteries of science: how our universe works and how intelligence works. Our key strategy is to link them, using physics to improve AI and AI to improve physics.