1/ 18 months into @pluralplatform, we’ve raised a new €400M fund. This is an endorsement of the first 26 teams we’ve backed and the level of ambition now emerging from Europe:
1/ Notable how three pioneers of deep learning (recognised in their shared 2018 Turing Award) have substantially diverged on how they assess risk from superintelligence:
1/ I've just left the final session of the first ever global Summit on AI Safety, chaired by @RishiSunak and @michelledonelan. A thread on how it started vs how it’s going:
I’m honoured to be appointed as the Chair of the UK's AI Foundation Model Taskforce.
A thread on why I'm doing this and how you might be able to help us.
📢Announced today, leading tech entrepreneur Ian Hogarth @soundboy has been appointed Chair of the UK's Foundation Model Taskforce.
With an initial £100m of funding, it will unite industry, academia & gov to pioneer the safe development of AI in the UK.
OpenAI describing a CERN-like project that major labs like Anthropic, DeepMind and OpenAI could combine into: "major governments around the world could set up a project that many current efforts become part of"
1/ 11 weeks ago I agreed to Chair the UK's efforts to accelerate state capacity in AI Safety - measuring and mitigating the risks of frontier models so we can safely capture their opportunities. Here is our first progress report:
📢An elite team of high-profile AI specialists has been recruited as advisors for the Frontier AI Taskforce.
The Taskforce will protect against risks, build on UK capabilities & improve public services.
@MichelleDonelan met Ian Hogarth @soundboy, Taskforce Chair, to discuss
Significant new paper with contributors spanning ARC, Anthropic, DeepMind, Cambridge University, OpenAI and more that proposes an approach for evaluating frontier AI models for extreme risks. Non-exhaustive list of these extreme risks below:
Extremely surprised and happy to be awarded a CBE today! It's a testament to the incredible work the team at the AI Safety Institute is doing to tackle risk at the frontier.
1/ The @stateofaireport 2021 is live!
For the 4th year, @nathanbenaich and I compile the most important work in AI research, industry, talent, and politics. Our report is open-access to all. Here are some things that really stood out for me...
One thing I find odd about the 'Google doesn't ship' punditry, is they have shipped and open sourced a remarkable range of AI products that are concretely driving science forward: AlphaFold (open source), GraphCast (open source), GNoME (380k new materials open sourced).
Here it is, the 2019 State of AI Report by @NathanBenaich and me. 130 slides covering the most important machine learning research, industry and political developments over the past 12 months. New section on China. Please RT if you find it interesting!
1/ STATE OF AI 2020 IS HERE! For the 3rd year running, @NathanBenaich and I have tried to compile the most interesting developments in AI. Featuring the biggest research breakthroughs, novel commercial applications and the major political developments.
1/ The Taskforce is a start-up inside government, delivering on the mission given to us by the Prime Minister: to build an AI research team that can evaluate risks at the frontier of AI. We are now 18 weeks old and this is our second progress report:
1/ When I was building @Songkick, the most valuable investors I had were people who had built things themselves - Greg McAdoo, Paul Graham and Saul Klein had all been founders and CEOs. They helped me learn faster and were far more committed to helping us succeed.
1/ The @stateofaireport 2022 is live!
For the 5th year, @nathanbenaich and I compile the most important work in AI research, industry, talent, and politics. Our report is open-access to all. Here are some things that really stood out for me…
Very proud of the landmark agreement the US and UK have signed today around joint testing of frontier AI systems. Testament to an incredible team of civil servants at the AI Safety Institute:
Notable to see staff from DeepMind, two Turing award winners (equivalent of Nobel for computer science) and other key researchers on this list. Worth engaging with why such informed people are concerned.
Geoff Hinton leaving Google feels like a watershed moment: “I don’t think they should scale this up more until they have understood whether they can control it”
4/ @ylecun argues that a moratorium on larger-than-GPT-4 training runs would cause more harm than good, and that "the magnitude of the AI alignment problem has been ridiculously overblown & our ability to solve it widely underestimated"
I think that the magnitude of the AI alignment problem has been ridiculously overblown & our ability to solve it widely underestimated.
I've been publicly called stupid before, but never as often as by the "AI is a significant existential risk" crowd.
That's OK, I'm used to it.
This is an absolutely incredible video.
Hinton: “That's an issue, right. We have to think hard about how to control that.”
Reporter: “Can we?”
Hinton: “We don't know. We haven't been there yet. But we can try!”
Reporter: “That seems kind of concerning?”
Hinton: “Uh, yes!”
Seriously inspired by the people that are applying to join the UK's Foundation Model Taskforce from both the AI community and the civil service. I've been doing back to back interviews and it's pretty amazing to see how many experienced researchers want to help.
"Look what's happening with artificial intelligence right now. It poses enormous promise and enormous concern. Our world stands at an inflection point. The choices we make today are literally going to determine the future of this world" - Biden speaking to the Irish Parliament today
Someone recently described 'open sourcing' of AI model weights to me as 'irreversible proliferation' and it's stuck with me as an important framing. Proliferation of capabilities can be very positive - democratises access etc - but also - significantly harder to reverse.
1/ I’m very excited to announce that @pluralplatform and UVC Partners are going to be co-leading a €7m seed round for Proxima Fusion, the first ever spin-out from the Max Planck Institute of Plasma Physics:
The AI Safety Institute is hiring for our technical team! We have the resources of government, and move quickly like a start-up. Please help me spread the word. For every 10 likes/RTs I'll give you 1 opinionated take on AGI safety/governance in 2024 below
1/ The AI Safety Institute has been in operation for almost eight months and I'm excited to announce some huge new hires. We have begun pre-deployment testing for potentially harmful capabilities on advanced AI systems. This is our third progress report:
I propose the Yann-Jaan continuum of existential risk from a misaligned AGI. At one end you have @ylecun, who argues we are a long way off AGI and that alignment will be easy. At the other end you have Jaan Tallinn, signatory to the moratorium letter.
EXCITING! "Drive dramatic cost reductions in critical clean energy technologies, including battery storage, negative emissions technologies, the next generation of building materials, renewable hydrogen, and advanced nuclear"
"It wasn’t until the lockdown ended and Cala customers could visit the restaurant in person that the reason became clear. Instead of a team of chefs, the food is cooked and assembled by a robot"
First concrete outcome from the AI Safety Summit: six of the leading frontier AI companies publish their AI safety policies, explaining how they plan to operationalise their commitments to build safe AI. Read them here:
2/ Yoshua Bengio was one of the leading signatories to the open letter calling on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4"
Delighted to speak today at @InstituteGC about the mission we're on at the Foundation Model Taskforce. It's been amazing to see so many talented researchers from all over the world get in touch with offers of help.
“In a major diplomatic coup for the British hosts, U.S. Commerce Secretary Gina Raimondo took the stage on Wednesday morning alongside Wu Zhaohui, China's vice minister of science, at the summit at Bletchley Park”
5/ And the way of building on that consensus is by solidifying the evidence base. Which is why I am so excited that Yoshua Bengio will now chair an international ‘State of the Science’ report:
If you work on Meta's protein-folding team please consider applying to the Foundation Model Taskforce. We have big plans for AI & scientific research. Apply here:
Two nice milestones at @join_ef this week:
- We just returned 17x net cash-on-cash to investors in our first ever cohort back in 2013
- We just passed 1x DPI on our first “proper” fund from 2015 (with another >10x of TVPI to go 🤞)
(1/3)
17/ I believe the LiveNation / TicketMaster merger of 2010 was fundamentally bad for innovation in the concert industry. It allowed the largest concert promoter to combine with the largest primary ticket company.
6/ Breakthrough 2: before, only AI companies could test for safety. Now, the leading companies agreed to work with Govts to conduct pre- and post-deployment testing of their next generation of models. This is huge:
Striking how behind the pace so many legendary investors are on AI - they mostly all missed investing in the iconic companies (DeepMind - missed, OpenAI - missed, Anthropic - missed, HuggingFace - missed) and are now barking out MBA platitudes hungry to stay relevant.
Spoke to a "venture group" in Nashville recently. I asked them what their ownership targets were and they told me 40% at seed.
This is why the Tier 3 cities for building startups will never be taken seriously
Something very special is happening at Proxima Fusion. Scientists and engineers from TUM, EPFL, Stanford, MIT, Lilium, Tesla, and Google relocating to Munich to build the world’s first commercial fusion power plant leveraging a quasi-isodynamic stellarator
4/ In the end, the case was settled out of court, 2 weeks before trial, for $130m. TicketMaster was required to pay a $10m criminal fine for intrusions into Songkick’s computer systems.
15/ As part of the settlement, the IP around this technology was acquired by TicketMaster. It endures as a ‘TicketMaster verified fan’ programme, but it feels like we would have a healthier concert industry if Songkick had been able to compete and scale this up independently.
“Ticketmaster has agreed to pay a $10 million fine to resolve charges that it intruded into the computer system of one of its competitors... the company illegally interfered in the business of a ticketing start-up called Songkick.”
10/ And to that end I put out a call to people across the world. If you are an AI specialist or safety researcher who wants to build out state capacity in AI safety and help shape the future of AI policy then get in touch:
It was great to spend time with @ylecun yesterday - we agreed on many things - including the need to put AI risks on a more empirical and rigorous basis.
The field of AI safety is in dire need of reliable data.
The UK AI Safety Institute is poised to conduct studies that will hopefully bring hard data to a field that is currently rife with wild speculations and methodologically dubious studies.
3/ Breakthrough 1: it used to be controversial to say that AI capability could be outstripping AI safety. Now, 28 countries and the EU have agreed that AI “poses significant risks” and signed The Bletchley Declaration:
Watching VCs' FOMO-driven exuberance in '21, then paralysis from the start of the '22 bear market, and now a hard pivot into making "AI Market Maps for Generative AI" in '23 really captures the herd-like mentality of most VCs.
9/ Songkick’s technology reduced the number of tickets available to scalpers - only 2% made it into the hands of scalpers, compared to e.g. 20% for other comparable tours.
"A man I’ve never met, an old buddy of my dad’s from school in Kenya, shook my hand and held me by the shoulder, looked intently into my eyes and said into my ear: “The 2 most important things in life are your body and your family”, and he walked off"
A sign of how seriously the UK takes its role in hosting the first global AI Safety Summit this autumn: @SciTechgovuk has appointed two "Sherpas" - Jonathan Black (formerly the UK's Sherpa to the G7 and G20) and @matthewclifford (deep relationships across the AI community).
It’s an honour to be appointed the Prime Minister’s Representative (“Sherpa”) for the AI Safety Summit. With @JonathanBlackUK, I’ll be spearheading the UK’s preparations for this crucial event (1/3)
I spoke with @CristinaCriddle and @madhumita29 about how AI could potentially amplify national security challenges and the work we are doing at the Taskforce to evaluate this. Note to readers: I have never referred to myself as an 'AI tsar', and never will!
4/ We’ve built a truly global consensus. It is a massive lift to have brought the US, EU and China along with a huge breadth of countries, under the UK’s leadership to agree that the risks from AI must be tackled.
7/ The UK’s new AI Safety Institute is the world’s first government capability for running these safety tests. We will evaluate the next generation of models. You can read our plans for the AISI and its research here:
4/ Last year @KHelioui, @seikatsu, @Taavet and I spent time working through how we could build a better product for European founders. We’re today launching @PluralPlatform with a €250M fund. It’s people who have built companies investing in the future iconic companies of Europe.
Back in the calmer days of 2021, Nathan and I predicted in our annual State of AI Report that ASML's market cap would top $500b. (It didn't.) Even in 2024 it still hasn't, and hovers around $400b. Why is that, when it has a similarly monopolistic position to Nvidia's?
6/ Very few attempts to build a new and scalable product in venture have succeeded, but we think it's worth trying: if it works, we could have a GDP-level impact on Europe and leave the European start-up ecosystem healthier than when we joined it.
.@davidad is a genuine original. I'm super excited he is going to be leading such an original and ambitious effort towards provably safe AGI. Go 🇬🇧 research!
I’m excited to share that I’m joining @ARIA_research, a new 🇬🇧 government funding agency, as a Programme Director.
I’m developing the structure to fund a concerted R&D effort towards accelerating mathematical modelling of real-world phenomena using AI, at a scale of O(10⁷) £/a.
A journalist at The Economist asked me the other day what the single biggest thing the EU could do to take a leadership position in AI. My answer: do whatever it takes to stop the UK leaving the EU and then build extensively on top of the UK's world class AI assets and heritage.
Fascinating to watch the reaction to Gemini on twitter - massive focus on 'wokeness', while a small set of people seem to be probing the breakthroughs in coding ability from longer context window:
🤯 Mind officially blown:
I recorded a screen capture of a task (looking for an apartment on Zillow). Gemini was able to generate Selenium code to replicate that task, and described everything I did step-by-step.
It even caught that my threshold was set to $3K, even though I…
Great to see @yudapearl (Turing Award winner & pioneer of causal inference) / @ylecun (Chief AI Scientist at Meta) / @erikbryn (leading economist with deep expertise in AI) debating these core questions of AI Safety/Alignment in public.
Not convincing. All it takes is for one variant of AGI to experience an environment where dominance has survival value and, oops, e-Sapiens will eradicate e-Neandertals and pass on the gene to their descendants.
8/ Breakthrough 3: we can’t do this alone. I’m so excited the US is launching a sister organisation which will work in lockstep with our effort and that we have agreed a partnership with Singapore. This is the start of a global effort to evaluate AI
5/ And at a pivotal moment, @RishiSunak has stepped up and is playing a global leadership role. He has pledged £100m on AI safety, the largest amount ever committed to this field by a nation state.
9/ Breakthrough 4: this first summit is a huge step. But it is only the first step. So it is crucial to have locked in the next 2 summits, which will be hosted by South Korea + France. The UK has created a new international process to make the world safer.
"The quantity of chips used to train a model is increasing by 2x-5x/year. Speed of chips is increasing by 2x every 1-2 years. And algorithmic efficiency is increasing by roughly 2x/year. These compound with each other" - Dario Amodei
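Amodei's point that these growth rates compound can be made concrete with quick back-of-the-envelope arithmetic. The sketch below is my own illustrative calculation, not from the quote: it treats each factor as an independent annual multiplier and reads "speed 2x every 1-2 years" as roughly 1.4x-2x per year.

```python
# Multiply the per-year growth factors to get effective training-compute growth.
# Assumption: "2x every 1-2 years" for chip speed ~= 1.41x-2x per year.
chips_low, chips_high = 2, 5            # quantity of chips per training run
speed_low, speed_high = 2 ** 0.5, 2     # per-chip speed
algo = 2                                # algorithmic efficiency

low = chips_low * speed_low * algo      # ~5.7x per year
high = chips_high * speed_high * algo   # 20x per year
print(f"effective compute growth: ~{low:.1f}x to {high:.0f}x per year")
# → effective compute growth: ~5.7x to 20x per year
```

Even the conservative end of these ranges implies effective compute available for frontier training runs grows several-fold per year.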
Great to see this leadership from the US in developing new voluntary safeguards for AI.
The UK will continue working with the US and our international partners in the run up to the first ever global summit on AI safety in the UK.
1/ Last year I co-led the first funding round for Proxima Fusion. I wrote here about how Germany has quietly become the global leader in a category of fusion - stellarators. Great to see @johnthornhillft highlighting the opportunity for Germany & Europe.
2/ How it started: we had 4 goals on safety, 1) build a global consensus on risk, 2) open up models to government testing, 3) partner with other governments in this testing, 4) line up the next summit to go further. How it’s going: 4 wins:
Horrific. "Miller saw the separation of families not as an unfortunate byproduct but as a tool to deter more immigration. According to three former officials, he had devised plans that would have separated even more children."
What are the best public proposals from the AI community for regulating general purpose AI systems (e.g. systems like OpenAI's GPT-4, DeepMind's Gato, Meta's LLaMA)? Please share and I will link to them in the thread below.
11/ We have £100m to spend on AI safety and the first global conference to prepare for. I want to hear from you and how you think you can help. The time is now and we need more people to step up and help.
designing new plants; neuromodulation; transformational optics; provably trustworthy AI; climate intervention technologies; nanobots - what an exciting set of programme areas at ARIA!
5/ Songkick was an innovator. 10 million+ fans visited Songkick each month to discover concerts and get personalised concert listings based on the music they listened to.