
Chris Scammell
@chrisconjecture
Followers 189 · Following 514 · Media 24 · Statuses 168
COO at Conjecture. Working to align AGI (and myself) with the Good.
London
Joined November 2022
The Compendium - A guide to extinction risks from AGI.
thecompendium.ai
The Compendium explains extinction risks from AI, where they come from, and how we can fix them.
0 · 0 · 6
What a legend. Thanks to Pope Leo for following up Antiqua et Nova with serious concern. First tweet thread as pope, and he directly addresses US-China tensions and offers to be a peace broker. Eyes on the ball for sure.
War is never inevitable. Weapons can and must fall silent, for they never solve problems but only intensify them. Those who sow peace will endure throughout history, not those who reap victims. Others are not enemies to hate but human beings with whom to speak.
0 · 0 · 2
RT @DKokotajlo: I have long said that Dwarkesh runs the best podcast because he is willing to go deep. We spent many hours arguing about ta….
0 · 23 · 0
RT @DKokotajlo: "How, exactly, could AI take over by 2027?". Introducing AI 2027: a deeply-researched scenario forecast I wrote alongside @….
0 · 994 · 0
Hugely support the Direct Institutional Plan and ControlAI's work. They've taken the hardest problem that exists and carved out a sensible, clear path towards policy progress. Check it out here!
controlai.com
The DIP offers a clear path to solving the problem of superintelligence, one that follows the way civilizational problems are best solved: awareness, civic engagement, and applying to AI the standa...
Superintelligence threatens us all. But we can turn the tide. Directly engaging institutions is the obvious, straightforward path. We've done it, now it's time to scale. We're releasing the Direct Institutional Plan (DIP) so everyone can help keep humanity in control.
0 · 0 · 16
Getting around to reading this now; it's really great. Thanks Anthony for writing this and engaging with the harder questions. It's not a matter of "how to create AGI" -- it's a matter of "how NOT to create AGI." Whether or not AGI gets built is not a static question of "is
New piece Keep The Future Human is out today. (🧵, link at end). Humanity has got about a year or two left to decide whether we're going to replace ourselves with machines – starting individually, then as a species.
1 · 2 · 10
RT @ryan_kidd44: LISA (@LondonSafeAI) is hiring a CEO! The LISA office is home to @apolloaisafety, @BlueDotImpact, @MATSprogram extension a….
london-safe-ai.notion.site
💁 Overview
0 · 3 · 0
RT @repligate: Bullshit. The reason is not boring or complicated or technical (requiring domain knowledge). Normies are able to understand….
0 · 30 · 0
Nice thread, thanks Emmett. It's been painful for me to go through the upskilling arc at Conjecture to realize that I'm competent enough to do things in the world. Simply realising that the system can change through your own actions confers a huge amount of responsibility and.
Rachel Lark is a singer-songwriter who was raised by two philosophers and writes emotionally powerful music that deals with themes of purpose, desire, politics, and meaning. I find her music incredibly compelling and also almost nauseating at the same time, and I want to say why.
0 · 0 · 2
RT @billyperrigo: Excl: New poll shows the British public wants much tougher AI rules:. ➡️87% want to block release of new AIs until develo….
time.com
A new poll shows the British public wants far stricter AI rules than its government does.
0 · 71 · 0
RT @ai_ctrl: UK POLITICIANS DEMAND REGULATION OF POWERFUL AI. TODAY: Politicians across the UK political spectrum back our campaign for bin….
0 · 58 · 0
The public doesn't want the current AI race. "The new poll shows that 87% of Brits would back a law requiring AI developers to prove their systems are safe before release, with 60% in favor of outlawing the development of 'smarter-than-human' AI models."
time.com
A new poll shows the British public wants far stricter AI rules than its government does.
1 · 0 · 4
To spell this out a little more seriously to the AI safety community pushing for evaluations and red-lines: voluntary commitments are often used as a tactical smokescreen. They give the appearance of a boundary to appease public pressure, and do nothing to stop the speed.
0 · 5 · 23
RT @NPCollapse: Agreed. For me, "point of no return" doesn't necessarily mean instantaneous human extinction, but "the point after which….
0 · 8 · 0