Our major new report "Computing Power and the Governance of Artificial Intelligence" has been released today.
We explain why AI hardware - chips & data centres - may be the most effective targets for risk-reducing AI policies
Having read the paper & supplementary materials, watched the narrated game & spoken to one of the human players, I'm pretty concerned.
The @ScienceMagazine paper centres on 'human-AI cooperation' & the bot is not supposed to lie. However, videos clearly show deception/manipulation
1/3
Meta AI presents CICERO — the first AI to achieve human-level performance in Diplomacy, a strategy game which requires building trust, negotiating and cooperating with multiple players.
Learn more about #CICERObyMetaAI:
I don't think this is a credible response to this Statement, for the simple reason that the vast majority of signatories (250+ by my count) are university professors, with no incentive to 'distract from business models' or hype up companies' products
You may be wondering: why are some of the very people who develop and deploy artificial intelligence sounding the alarm about its existential threat? Consider two reasons:
Lol the $44bn Musk just spent on Twitter is equivalent to the entirety of EA-aligned funding (according to @ben_j_todd). Imagine the carbon removal, vaccine manufacturing, semiconductor fabs, alternative proteins, etc. it could have been spent on
OPPENHEIMER is a very, very good film. But two big flaws for me 🧵
1. Presents the Nazis as racing, clearly in the lead, and only losing cos of a technical mistake.
But we know now that Hitler decided against racing in June 1942!
Some people seem surprised by this, but they shouldn't be.
This is the mainstream view of most staff and leadership at all the frontier AI companies - OpenAI, Anthropic, DeepMind, Inflection, Conjecture, much of Microsoft and Google
It's absurd that the reading room of the British Museum has been closed to the public for a decade. Now only doing one 20-minute tour a week, at 11.30 on a Tuesday
Maybe it suits the museum staff & archivists to have exclusive access, but come on!
Rather than share *OMG capability jump*, why not share *OMG they invested 8 months in safety research, risk assessment & iteration*?
Well done for exploring these risks, getting ~50 external experts to red-team, building automated filters & content moderation, etc.
GPT-4 is out!
"We spent 6 months making GPT-4 safer and more aligned. GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations."
This thread is entirely nonsense and based on no evidence. It's almost a conspiracy theory - and it's deeply ironic that this comes from someone who *works* on misinformation and 'fake news'
I'm often critical of Effective Altruism (EA) and I'm sure I'll get more pushback for this, but I've been thinking a lot lately about the discourse on AI doomerism, extinction risk, etc., and here's my big take on what's going on and why.
Buckle up, friends, it gets spicy.🧵
I can’t figure out why some of these folks were on the board to begin with.
Tasha’s claim to fame seems to be her marriage to Joseph Gordon-Levitt; Helen Toner is a director of grants at Georgetown, where she graduated from in 2021 with a master's in security studies.
The US AISI would be extremely lucky to get Paul Christiano - he's a key figure in the field of AI evaluations & literally the inventor of RLHF.
UK AISI is very lucky to have Dr Christiano on its Advisory Board
I’m going to add some extremely important context this article is missing.
The EO specifically asks NIST (and AISI) to focus on certain tasks (CBRN risks etc). Paul Christiano is extremely qualified for those tasks—important context that should’ve been included here.
Another
I just wrote this for @voxdotcom: "What Thiel gets wrong about existential risk"
He argues that 'Luddite' concerns are the main cause of science & tech stagnation and we should be going faster in all areas. I disagree.
1/3
James Manyika
Audrey Tang
Yoshua Bengio
Anca Dragan
Gillian Hadfield
Ian Goodfellow
Jacob Steinhardt
Max Tegmark
Karina Vold
Jess Whittlestone
Michael Osborne
Eva Vivalt
Danit Gal
Tristan Harris
Jose Hernandez-Orallo
Des Browne
Andrew Critch
Seán Ó hÉigeartaigh
The Anh Han
Valér-
How many of the signatories have used their expertise, their influence, their resources to rigorously analyze and expose the risks of current AI systems, and to build solutions that address real harms? That is what we need. Not letters, headlines and inflammatory statements.
Truly an honor to welcome @harari_yuval as the first distinguished research fellow @CSERCambridge before a packed auditorium. I never thought talking about World War III could be this much fun!
Oppenheimer was a privileged 'Cry Baby Scientist', who built the Bomb based on a mistake (the Nazis didn't race) then lost every political fight to limit the arms race he started.
He wasn't American Prometheus, he was a schmuck.
My latest @voxdotcom
It's a real honour to receive the 2022 Leonard M. Rieser award alongside @cpruhl for our article “Why policy makers should beware claims of new ‘arms races’”.
I deeply respect @BulletinAtomic: a champion of a safer & more secure world for 77 years.
@NathanpmYoung This is straight-up neocolonialism. I don't think it's a good idea whatsoever
The historic examples (the East India Companies, the United Fruit Company & 'banana republics', etc.) were absolutely rife with awful abuses
A few thoughts on global catastrophic risk a few days into Putin's invasion - would be very interested in others' reflections!
-Nukes. Still the most urgent risk. Probably the most dangerous situation since 1983? Crucial to avoid misperceptions/mistakes
1/5
YES! So excited for this crucially important paper.
If it's right, then big AI models will cost $100m+ *each* to train by 2030.
Forget about academia, start-ups, open-source collectives or individuals: they can't keep up! For good or ill, Big Tech will be the only game in town
Some nice light reading from me @voxdotcom ahead of the Oscars tonight:
Why did Oppenheimer leave me bawling my eyes out?
“It’s all still real. The weapons are still there. Every 12 minutes they could kill everyone we love. We’d all starve to death. 5 billion people could die.”
Curious about all these "warnings" and "statements" signed by "AI experts" that AGI threatens humanity with "extinction"? In this rather philosophical article, I explain what they're talking about -- and it's almost certainly not what you think. A short 🧵
The existence of existential risks (like nuclear or biological weapons) fatally undermines the simple "everything is getting better all the time" story Steven Pinker argued for in 2018's 'Enlightenment Now'.
He's struggled with this basic problem for at least the last 6 years.
Lots of interesting points on AGI risk in @timnitGebru's recent talk!
Many people in the field of AGI risk research agree with these points, I certainly do. I would be much keener for AI development to go down a 'comprehensive AI services' route than a 'single agential' route
What a week for AI governance!
US Exec Order + OMB, G7 Hiroshima Principles, Bletchley Declaration (w/ India, China & US!), network of AI Safety Institutes, companies giving access
Incredible 6 month sprint from policymakers, building on over a decade of research and advocacy
AI companies! It's in your direct, narrow, short-term, financial self-interest to invest in safety.
You can’t make money if you’ve had to take your language model down!
My new @voxdotcom article:
I'm so excited for this launch! We're a group of Labour members who are sick of the short-term time horizons of politics & will be encouraging the Labour Party to secure a fairer & safer future for all
It's a real honour to be on the Advisory Board: can't wait for the next steps
“Spaceship Britain. The first big society to stop farming since the Neolithic founded modern civilization. The first society to have less than 10% of its workforce in agriculture.” - @adam_tooze
From this fascinating talk on The Wages of Destruction
Very important new UK Government report on Frontier AI Capabilities and Risk
(which I made a small contribution to)
Crucial reading before the AI Safety Summit next week!
NEWS
Bronx medical school, the Albert Einstein College of Medicine, is now tuition-free thanks to a $1 billion gift from Dr. Ruth Gottesman, a former professor.
Gottesman, whose late husband was an early investor with Warren Buffett, has made it a condition of the gift that
I was surprised when I read this to find the article doesn't actually feature any arguments for or against *whether* AI could pose catastrophic risks
Eg no discussion of *whether* it could help make cyber or bioweapons or not, dangers of poor defence/societal integration etc
I'd like the researchers involved to say quite a bit more about "A.3 Manipulation"
What are possible prevention, detection & mitigation steps?
What are the possible use cases? What are the benefits/downsides of them? Has Meta considered developing products based on this?
3/3
I'll be on the @bbcworldservice in 15-20 minutes to talk about AI.
eg Vice President Harris' meeting with tech firms, the UK competition review and more (I'd be amazed if Hinton/pause letter doesn't come up!)
Are you young (20s-30s), relatively unattached, decent income? The #1 thing you can do for the climate is donate to these highly effective climate charities
Are you young (20s-30s), relatively unattached, decent income? The #1 thing you can do for the climate is move to a walkable neighborhood and drive less
This is the most egregious.
Erases the work done in global health: bednets aren't "too small", EAs helped deliver 70m bednets for kids.
When MacArthur pulled out of nukes, Longview stepped in with $10m.
EAs have been advocating for AI ethics for a decade!
Reeling from the reputational damage SBF caused to EA, this became somewhat of an existential risk to the EA movement itself: nukes are too obvious, mosquito nets are too small, putting AI x-risk on the map was the path to show the world the enormous value EA offers society.
What this reminds me of most strongly is the classic Reagan-Gorbachev single sentence:
"A nuclear war cannot be won and must never be fought"
There's a clarity and urgency to a single sentence.
We just put out a statement:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Signatories include Hinton, Bengio, Altman, Hassabis, Song, etc.
🧵 (1/6)
These remarkable industrial achievements are reduced to marbles in a bowl. Come on, you couldn't have done a quick 30-second scene of Oppenheimer talking to Groves in one of these cool locations?!?!
Screenshots of the stab below.
The human player said:
"The bot is supposed to never lie [...] I doubt this was the case here"
"I was definitely caught more off guard as a result of this message; I knew the bot doesn't lie, so I thought the stab wouldn't happen."
2/3
Major new report calling for frontier AI regulation.
Explains the risks & makes concrete recommendations for companies & governments
Well worth a read - it will shape the conversation over the next 6 months. Congratulations to the authors
I like it when people describe themselves as "researchers" or "academics".
The word we *really* want to use is that beautiful, authoritative, powerful word "scientist"
2. The film (like most media) focuses far too much on the scientists at Los Alamos & doesn't show Oak Ridge or Hanford
The enriched uranium & plutonium production plants cost 90% of the Manhattan Project budget and employed >90% of its workers
I think this is important as people are coming away saying "they were right to race, we couldn't let Hitler have a nuclear monopoly"
But I think one of our key takeaways should be "before you start racing, check whether you're actually in a race"
Recently, a spate of articles in prominent media outlets has asserted that concern for existential AI risk is not only wrong or misguided, but disingenuous, deceptive, or manipulative. Just wanted to capture them in one place -
Longtermists seem to think human survival is more important than climate change/nukes/etc as one day humans may create AGI. But what if those 'non-existential' risks ('only a few billion dead!') leave us stuck in a perpetual dark age? No data centres or research labs in Mad Max.
The modern economy rests on a single road in Spruce Pine, North Carolina. The road runs to the two mines that are the sole supplier of the quartz required to make the crucibles needed to refine silicon wafers.
There are no alternative sources known. From Conway’s Material World:
Denis Villeneuve is set to direct ‘NUCLEAR WAR: A SCENARIO’
The film follows the events that would happen if a nuclear war began, based on information from interviews with military & civilian experts.
(Source: Deadline)
Are there lab accidents and other anthropogenic disease sources? Yes.
Are some of those events of a severity that could lead to outbreaks? Also yes.
Here's our newly published list of 71 such incidents from 1975-2016 in @F1000Research:
Catching up on reading - really positive news coming out of DC!
Federal agencies have completed all of the 150-day and 90-day actions tasked by the Executive Order
Office of Management & Budget issued its first govt-wide policy to mitigate AI risks
White House is going after it!
#DoomsdayClock is the closest it's ever been to midnight
People asked me "can that really be true?". I'm a Cambridge Uni existential risk researcher and my view is: unfortunately yes
TL;DR nuclear situation the worst in 30 yrs, climate situation the worst ever, and new bio + AI risks
-Norms matter. 'No wars of conquest' does really seem to have held back great powers, and states willing to spend a lot to uphold that norm
-EU a more consequential actor than many may have thought - Russia may be less important going forward if its high-tech industry craters
3/5
i am so sorry to do this, but i can’t keep this quiet. i just ran the calculations and the odds a giant meteor wipes humans out in 10 years are over 1%.
i’m calling for a pause on all domestic production, including AI safety efforts, until we figure out what is going on.
i was hoping that the oppenheimer movie would inspire a generation of kids to be physicists but it really missed the mark on that.
let's get that movie made!
(i think the social network managed to do this for startup founders.)
Deeply saddened to hear about the loss of Peter Eckersley.
Kind, brilliant and principled. Sensible when it mattered; eccentric when it didn't.
A huge loss to the field - and an example to try to live up to.
Laughably bad analysis. You can't just cherry-pick 6 examples! You've got to do a comprehensive assessment of funding in the field if you want to argue this
Rather than squabbling over slice size, shouldn't we expand the pie for all AI risk research?
The difference in performance (in tech and outside) between the EU & USA is primarily caused by
- a fragmented, incomplete single market
- austerity led by Germany's CDU
- less competitive universities
- capital markets not deep enough for scale-ups and IPOs
Not 'overregulation'
@geoffreyirving @yaringal 4/ And we’re not just scaling in the UK. We’re now opening an AISI office in San Francisco to cement our trans-Atlantic partnership with the US and to work with the best and brightest talent on both sides of the Atlantic.
just on the nukes one again - OpenPhil have been basically the only funder *since the 1980s* to commission new research on nuclear winter with modern climate models
gaaaaaaaaarrrggghhh
1/
Facebook's LLaMa language model has proliferated to 4chan, where it will be used to mass-produce disinformation & hate.
It's bizarre that in *2023* Facebook released LLaMa so irresponsibly - we warned about this 5 years ago!
These three communities & sets of ideas overlap a lot and I think reinforce one another, but they are intellectually & practically separable, and there are people in each section doing great work!
*Existential Risk
*Effective Altruism
*Longtermism/Future Generations rights
1/ The Taskforce is a start-up inside government, delivering on the mission given to us by the Prime Minister: to build an AI research team that can evaluate risks at the frontier of AI. We are now 18 weeks old and this is our second progress report:
This is important as we always remember the few privileged geniuses, not the industry & workers that made it possible.
& again it messes with the modern takeaway: when we control proliferation today, we mainly monitor and control *production*. That's what determines eg breakout time
Great to see that such a diverse range of countries, civil society groups and academics will be attending the AI Safety Summit
This is something that academia, civil society and the Global South have been calling for
1970s professors: "and a final, brief thanks to my loving wife, who came up with the idea, did the research, wrote and edited the book, secured a publisher, and raised our children."
No part of my worldview is the same after talking with Carl Shulman
Maybe 3 people in the world have thought as rigorously about so many interesting topics
Part 1 is about Carl's model of an intelligence explosion
(We ended up talking for 8 hrs, so I'm splitting this episode
Important article: The single most important data point that suggests "progress is unlikely to slow in the next 2-3y": GPT-4 cost ~$100M (probably less), and Alphabet has 1000x that much money in cash on hand:
Four companies in the whole world have enough compute capacity to do frontier training runs
All four have 'priority access' evals agreements with UK AISI & are regulated by the US Executive Order and the EU AI Act
The job is nowhere near done, but EY's pessimism is unjustified
@JacquesThibs Then build the regulatory infrastructure for that, and deproliferate the physical hardware to few enough centers that it's physically possible to issue an international stop order, and then talk evals with the international monitoring organization.
Really remarkable report from the French Govt Commission for AI
* €10bn public + private investment in AI, big effort to build new data centres
* World AI Organisation
* €500m International Fund for Public Interest AI
It's got me pumped for their AI Safety Summit!
We just launched our new @OurWorldInData page on AI!
We think this technology is extremely important for the future of the world, and our own lives.
Over the last few months we prepared a range of visualizations and articles to support the public conversation.
AI developers face a crisis of trust. They (mostly) want to act responsibly, but their claims aren’t very verifiable, so they aren’t trusted. I worked with 58 authors from tech & ML labs to develop 10 concrete mechanisms to move toward trustworthy AI development
I’ve had lots of questions about the AI Safety Summit in November. Today we share more detail on our focus and objectives: it’s a huge opportunity to collaborate with global partners on ensuring we safely reap the benefits of this technology
Keep on coming back to this wonderful blog post - continually mind-blowing
Imagine how much vaster, weirder and more alien some model trained on astronomical and genomic data would be