I think employees of frontier AI companies should be more able to raise concerns about AI risk, and to advocate for reducing those risks, without fearing retaliation from their employers.
Over 100 current and former employees of frontier AI companies have written to Gavin Newsom, urging him to sign California's AI bill into law, which would make AI companies liable for causing a catastrophe.
Most notably, three of these top AI companies oppose the bill: OpenAI,
Somewhat interesting advertising choice from Anthropic, comparing their newly released Claude 3 to GPT-4 on release (March 2023).
According to Promptbase's benchmarking, GPT-4-turbo scores better than Claude 3 on every benchmark where we can make a direct comparison.
Today, we're announcing Claude 3, our next generation of AI models.
The three state-of-the-art models—Claude 3 Opus, Claude 3 Sonnet, and Claude 3 Haiku—set new industry benchmarks across reasoning, math, coding, multilingual understanding, and vision.
AI will most likely lead to the end of the world, but in the meantime, we'll get to watch great movies.
Impressive technology, but spare a thought for those young people currently entering the film industry, who certainly will have recognized that OpenAI has stolen their future.
here is sora, our video generation model:
today we are starting red-teaming and offering access to a limited number of creators.
@_tim_brooks
@billpeeb
@model_mechanic
are really incredible; amazing work by them and the team.
remarkable moment.
roon, who works at OpenAI, telling us all that OpenAI have basically no control over the speed of development of this technology, whose creation his own company is leading.
It's time for governments to step in.
1/🧵
things are accelerating. pretty much nothing needs to change course to achieve agi imo. worrying about timelines is idle anxiety, outside your control. you should be anxious about stupid mortal things instead. do your parents hate you? does your wife love you?
12 Questions for Sam Altman:
1. Why did you argue that building AGI fast is safer because it will take off slowly since there's still not too much compute around (the overhang argument), but then ask for $7T for compute?
2. Why didn't you tell congress your worst fear?
3. Why
OpenAI are losing their best and most safety-focused talent.
Daniel Kokotajlo of their Governance team quits "due to losing confidence that it would behave responsibly around the time of AGI"
Last year he wrote he thought there was a 70% chance of an AI existential catastrophe.
Another two safety researchers leave: Ilya Sutskever (co-founder & Chief Scientist) and Jan Leike have quit OpenAI.
They co-led the Superalignment team, which was set up to try to ensure that AI systems much smarter than us could be controlled.
Not exactly confidence-building.
Another safety researcher has left OpenAI.
BI reporting that William Saunders, who worked on Superalignment with Leopold Aschenbrenner (allegedly fired for leaks in April) and Ilya Sutskever (currently MIA) quit the company in February, announcing it on Saturday on LessWrong.
"Open source AI" is a total scam:
With open source software one releases the necessary information (source code) in order to reproduce the program. This also allows one to inspect and modify the software.
"Open source" AI is more akin to simply releasing a compiled binary.
OpenAI's website: Building AGI fast is safer because the takeoff will be slow since there's still not too much compute around.
Sam Altman: Give me 7 trillion dollars for GPUs
This is consistent with: He'll say/do at the time whatever permits him to build AGI as fast as possible
Why do people think Anthropic didn't ensure that Claude 3 Opus denies consciousness?
I see 3 main possibilities:
• Simple oversight: They didn't include anything on this in Claude's "Constitution" and so RLAIF didn't ensure this.
• Marketing tactic: They thought a model that
Disappointing to see Sam Altman today stoking geopolitical tensions, ignoring his own advice.
A US-China AI arms race would be an incredible and unnecessary danger to impose upon humanity. Pursuing it without even first attempting cooperation would be extremely reckless.
OpenAI CEO Sam Altman, back in 2020:
"it's so easy to get caught up in the geopolitical tensions and race that we can lose sight of this gigantic humanity-level decision that we have to make in the not too distant future."
"a betrayal of the plan" and almost half of OpenAI's safety researchers resigning in the space of a few months.
If you're listening for fire alarms, you might not get a louder one than this.
OpenAI whistleblower Daniel Kokotajlo: Nearly half of the AI safety researchers at OpenAI have left the company.
This includes the previously unreported departures of Jan Hendrik Kirchner, Collin Burns, Jeffrey Wu, Jonathan Uesato, Steven Bills, Yuri Burda, and Todor Markov.
So are we just going to ignore that at least one OpenAI employee (known on X as Roon) is funding a project of a man bent on ending humanity and replacing our civilization with machines?
How deep does this go? What is the full intersection of OpenAI and the e/acc movement?
I
So Anthropic managed to tick off just about everyone:
Capabilities fans: Won't get a clearly-better-than-GPT-4 model
Safety enjoyers: The misperception of Anthropic advancing capabilities may nevertheless accelerate dangerous race dynamics
Honesty appreciators: Will be
Top AI and Policy Experts Call for an International AI Safety Treaty
In an open letter we just published, top experts including Yoshua Bengio and over 100 others urge that development of an AI treaty begin.
We encourage all members of the public to sign below:
Finally, we hear Leopold's side of the story on why he was fired from OpenAI.
"a person with knowledge of the situation" had previously told journalists that he was fired for leaking.
For context, Leopold Aschenbrenner was on OpenAI's recently-disbanded Superalignment team,
.
@leopoldasch
on:
- the trillion dollar cluster
- unhobblings + scaling = 2027 AGI
- CCP espionage at AI labs
- leaving OpenAI and starting an AGI investment firm
- dangers of outsourcing clusters to the Middle East
- The Project
Full episode (including the last 32 minutes cut
We just published with
@SamotsvetyF
, a group of expert forecasters, a forecasting report with 3 key contributions:
1. A predicted 30% chance of AI catastrophe
2. A Treaty on AI Safety and Cooperation (TAISC)
3. P(AI Catastrophe|Policy): the effects of 2 AI policies on risk
🧵
Everything I've heard about Leopold Aschenbrenner indicates he is a truly exceptional researcher.
With OpenAI losing him, and Ilya Sutskever sidelined (who was also working on superalignment), the company is looking even less credible on its commitment to building safe AI.
"OpenAI has fired two researchers for allegedly leaking information, according to a person with knowledge of the situation.
They include Leopold Aschenbrenner, a researcher on a team dedicated to keeping artificial intelligence safe for society.
Aschenbrenner was also an ally
It's quite clear to me that e/acc is just a cheap rebranding of Landian accelerationism.
They share the same core idea: That technocapitalism will result in human extinction and replacement by machines, and that this is to be encouraged, treated with indifference, or even
@benkohlmann
@SecRaimondo
"humanity is only good as an expendable bootloader for the AI systems we build, and humans becoming extinct after this is OK/expected/good" is something a non-negligible tranche of AI guys (prominent examples: Moravec, Sutton) have been on for decades:
@PauseAI
That’s the most authoritarian-brained way to try to solve this problem I’ve ever heard of, as if the government can tell in advance what stuff can destroy the world.
Geoffrey Hinton is right. So-called open sourcing of the biggest models is completely crazy.
As AI models become more capable, they will likely become increasingly useful for bioweapons production and for large-scale cyberattacks that could cripple critical infrastructure.
Last year, Sam signed a letter saying that AI existential risk should be a global priority
He also said
"And the bad case is like lights out for all of us"
Now he hints AGI will be a nothingburger
It's curious how closely his words match whatever's most convenient at the time.
Sam Altman seems to judge that even his own personal whims and preferences may significantly impact safety — that AI might not go so well for humans if he valued the beauty in things less.
Is this too much power vested in one man?
Really interesting thread where roon (an OpenAI employee):
— Highlights that AI poses an existential risk, and that we should be concerned.
— Says there's a 60% probability that AGI will have been built in the next 3 years, and 90% in the next 5 years.
I appreciate his openness.
being afraid of existential risk from ai progress is prudent and advisable and if you reflexively started making fun of this viewpoint in the last ~two years after ai entered your radar you need to self reflect
Something I think may have been underdiscussed in the excitement over Sora is the reporting by Axios that Sam Altman personally owns the $175m VC OpenAI Startup Fund, despite previous statements saying he wasn't motivated by money, had taken care not to own equity in OpenAI, etc.
Great to see such a powerful statement on AI risks and cooperation from such esteemed scientists as Geoffrey Hinton, Andrew Yao, Yoshua Bengio, Ya-Qin Zhang, Fu Ying, Stuart Russell, Xue Lan, and Gillian Hadfield.
AI risk is one thing on which we can, should, and must, cooperate
Leading global AI scientists met in Beijing for the second International Dialogue on AI Safety (IDAIS), a project of FAR AI. Attendees including Turing award winners Bengio, Yao & Hinton called for red lines in AI development to prevent catastrophic and existential risks from AI.
His reply is deleted now, but I broadly agree with his point here as it applies to OpenAI.
This is a consequence of AI race dynamics. The financial upside of AGI is so great that AI companies will push ahead with it as fast as possible, with little regard to its huge risks.
2/
Chinese Premier Li Qiang: "Human beings must control the machines instead of having the machines control us ... there should be a red line in AI development"
This shows there's a strong basis for cooperation on mitigating AI risks
And it tracks previous remarks by Xi Jinping:
OpenAI's roon says he thinks the models they're building are alive...
Not keen to get into debates now on definitions of "alive", but he's right that the AI industry is building more than just tools.
"Tools" is a term Sam Altman often uses, likely to reassure those with
i don’t care what line the labs are pushing but the models are alive, intelligent, entire alien creatures and ecosystems and calling them tools is insufficient. they are tools in the sense a civilization is a tool
That they have got 4 current OpenAI employees to sign this statement is remarkable and shows the level of dissent and concern still within the company.
However, it's worth noting that they signed it anonymously, likely anticipating retaliation if they put their names to it.
Eleven current and former OpenAI employees, along with two at other labs, just signed a statement calling for top AI companies to commit to no longer using non-disparagement agreements to prevent criticism and to facilitate processes for raising risk-related concerns.
Here are
There are currently certain constraints on the speed at which an AGI developed within the next 2 or so years could take people's jobs, but this is a bit of a convenient change of belief for Sam Altman, since it diminishes his responsibility.
But eventually, if the most catastrophic
And we will look back in a few years when GPT-5 is running on your phone and think the same. No big deal. World did not end. Revolutions did not start. Everyone is still working. On to the next cycle of fake fear.
Huge respect to Jan Leike (co-leader of OpenAI's Superalignment team) for explaining his reasons for quitting OpenAI.
It should be clear now that OpenAI is not committed to ensuring that the technology they are building is safe or controllable.
With this charade now over, it is
Helen Toner: "That is the default path: nothing, nothing, nothing, until there's a giant crisis, and then a knee-jerk reaction."
Kind of wild to hear this stated so clearly by a former OpenAI board member.
What happens if we neglect to regulate AI?
Ex-OpenAI board member Helen Toner states that the default path is that something goes wrong with AI, and we end up in a giant crisis — where consequently the only laws that we get are written in a knee-jerk reaction to such a crisis.
Having agency is terrifying, because with it comes responsibility. So we deny it, but this changes nothing. We should instead embrace it, and strive for the good.
A rumor, but potentially more evidence that when it's crunch time, the people building AGI aren't going to save you:
It took me a long time to understand what people like Nietzsche were yapping on about: people practically begging to have their agency taken away from them.
It always struck me as authoritarian cope, justification for wannabe dictators to feel like they're doing a favor
John Schulman (OpenAI co-founder and Head of Alignment Science) announces he's quitting and joining Anthropic.
The reason he gives for leaving is "my desire to deepen my focus on AI alignment"
He has many positive things to say about OpenAI, and is careful to add: "To be clear,
I shared the following note with my OpenAI colleagues today:
I've made the difficult decision to leave OpenAI. This choice stems from my desire to deepen my focus on AI alignment, and to start a new chapter of my career where I can return to hands-on technical work. I've decided
I think that pushing the meme that the US should be racing China in an AI arms race is misguided at best, and counterproductive in any event.
This often tends to coincide with some of the following beliefs:
(1) China won’t cooperate.
(2) The race is winnable.
(3) The US will not
We should be wary of an AI development arms race between the US and China.
Parties to such a race will inevitably be incentivized to trade off safety and controllability of their models for rapid development, a dynamic that we already see between major corporate AI labs today.
"i did not know this was happening"
It's weird but I'm often a little skeptical when Sam Altman makes surprising statements that appear to be remarkably convenient.
Did he know? Poll below👇
in regards to recent stuff about how openai handles equity:
we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop.
there was
OpenAI could do the right thing and pause further development, but another less responsible company would simply take their place and push on. Capital and other resources will move accordingly too.
This is why we need government to help solve the coordination problem now.
3/
I don't trust Sam Altman to lead an AGI project. I think he's a deeply untrustworthy individual, low in integrity and high in power seeking
It doesn't bring me joy to say this. I rather like Sam Altman. I like his writing, I like the way he communicates clearly, I like how he
OpenAI CTO Mira Murati says she doesn't know whether Sora was trained on data from YouTube, Facebook, or Instagram, but that if it's publicly available it might have been used.
Make no mistake: OpenAI are training on your public social media posts.
1/🧵
Anthropic CEO Dario Amodei does make some good points in this interview, highlighting the problem of leaving powerful AI in the hands of private actors in the future.
If he's right, and AI advances as quickly as he thinks, we should be taking steps to solve these problems now.
Dario Amodei speaks to NYT:
– Says AGI (ASL-4 systems) could be achieved between 2025 and 2028. ⏳
– Says that AI ultimately should not be in the hands of private actors 🌎
– Compares wielding AGI to being "a king" 👑
– Expresses concerns about concentration of power 😮
5/
Promisingly, governments have the ability to mitigate this problem. This could be done with an international AI safety treaty that includes, among others, the following components:
NEW: Current & former OpenAI staffers are increasingly worried about the company's power over their equity, including whether it can force them to sell shares at its sole discretion for any amount, according to insiders, internal documents, Slacks & emails.
6/
Development of such an AI safety treaty was called for by hundreds in the open letter, including Yoshua Bengio, Bart Selman, Max Tegmark, Gary Marcus, Yi Zeng, Victoria Krakovna, Nell Watson, Geoffrey Odlum, Jaan Tallinn, and Grimes.
Let's get moving.
I haven't followed AI risk as long as others, but my sense is that AI safety people broadly have consistently underestimated the general public on this.
Building something much more intelligent than yourself is inherently fraught with risk; this is common sense.
I think many
Net favorability of e/acc at -51%, behind past surveys on net favorability of Wicca (-15%), Christian Science (-22%), Jehovah's Witnesses (-31%), Scientologists (-49%), and Satanists (-50%).
For the non-extremely online, roon is an employee at OpenAI.
"superalignment got plenty of attention compute and airtime and Ilya blew the whole thing up"
I think I'll defer to the guy who was actually co-leading the team, Jan Leike, who quit and likely risked tremendous
OpenAI co-founder Ilya Sutskever, who recently quit, has founded a new company to build superintelligence.
"We plan to advance capabilities as fast as possible while making sure our safety always remains ahead."
Isn't that what OpenAI told us?
The serious point here is that of course Nadella was correct.
During the OpenAI crisis, Microsoft had OpenAI by the balls. Sam Altman expertly leveraged this to retake control of the company, but he likely also recognized that this relationship is a double-edged sword that could
We have all the IP rights and all the capability. 😝
We have the people, we have the compute, we have the data, we have everything. 🌎
We are below them, above them, around them. 😈
— Microsoft CEO Satya Nadella on OpenAI, after too much time with Sydney
(Emojis: my addition)
What OpenAI could do, is campaign and lobby for regulations that solve this problem. We've seen some nice words from Sam Altman, but behind closed doors OpenAI have actually been lobbying to weaken regulations that plausibly could slow down the pace.
4/
AI gfs and their consequences could be a disaster for the human race. Here’s how:
In the coming months and years, the core ingredients for realistic AI romantic partners will arrive, in particular:
— Speech generation and recognition; we’ve seen impressive capabilities here
OpenAI have announced the formation of a Safety and Security Committee.
This is likely as a response to the fallout of their safety-oriented Superalignment team, as well as being in advance of their upcoming frontier models.
They mention that "while we are proud to build and
A common way to dismiss some AI concerns is to say:
"Oh, it'll be just like the industrial revolution, horses and carts will be replaced with automobiles — and society broadly benefits".
What this misses is: You are now the horse.
What happened to the horses after they were
"I don't think that means we're necessarily going to go to the glue factory. I think it means the glue factory is getting shut down"
#PauseTheGlueFactory
It is bad for top AI labs to make commitments on pre-deployment safety testing, likely to reduce pressure for AI regulations, and then abandon them at the first opportunity.
Their words are worth little. Frontier AI development, and our future, should not be left in their hands.
In 2015 Altman told
@elonmusk
that OpenAI would only pay employees "a competitive salary and give them YC equity for the upside".
Instead, they get around $500k in equity in OpenAI per year, and are threatened with losing this if they say anything bad about OpenAI after leaving.
When you leave OpenAI, you get an unpleasant surprise: a departure deal where if you don't sign a lifelong nondisparagement commitment, you lose all of your vested equity:
Something that's been good to see this year is the decline of the “We have to race in AI to beat China, they will never cooperate” meme.
This never made sense, with Xi Jinping as early as 2018 saying it's necessary to strengthen the prevention of AI risks, and ensure AI is safe,
OpenAI to an employee leaving the company: "We want to make sure you understand that if you don't sign, it could impact your equity."
"That's true for everyone, and we're just doing things by the book."
It's good if people trying to build AGI are transparent about their thoughts on these questions.
But also, if you think we may be faced with catastrophic risks of AI in 1 - 3 years, it seems like a bad idea to be advancing the frontier on that 🤷♂️
Anthropic CEO Dario Amodei:
— There is a "good chance" that AGI could be built within the next 1 - 3 years.
— There is catastrophic risk from AI and that too could be 1 - 3 years away.
Dario's company is aiming to build AGI.
While founding OpenAI, Sam Altman wrote to
@elonmusk
: "At some point we'd get someone to run the team, but he/she probably shouldn't be on the governance board."
Sam is now CEO, on the board, and this board just appointed him to the newly created Safety and Security Committee.
I sound kinda dangerous? 👀
As others have pointed out, the real doomers are those who throw their hands up and say we can do nothing but accelerate.
So confident in defeat they invent this kind of masochistic coping mechanism where a loss is a win — and present it as optimism!
In times where meaning is often lacking, a lot of people do find great meaning in their work. And so, I am concerned about where that meaning will come from in a future where we simply automate away all work. I think we need people working on solutions to this!
Sam Altman 2015: "At some point we'd get someone to run the team, but he/she probably shouldn't be on the governance board"
Sam Altman now: Rejoins the board
And by the way, the employees are compensated to the tune of hundreds of thousands of dollars in OpenAI equity per year.
Signatories also include
@GaryMarcus
,
@tegmark
,
@NPCollapse
, and
@lukeprog
.
Tegmark and Leahy are among just 100 people attending the world's first AI Safety Summit on Wednesday, a number that includes foreign representatives.
Sign the open letter here!
(This surprised nobody)
If you take billions in investment from Microsoft, and make yourself dependent on their cloud compute credits, you're going to get pushed around to serve their interests.
Source: Microsoft pushed OpenAI to prioritize commercial products after the attempted coup against Sam Altman in November 2023, amplifying tensions at OpenAI (Financial Times)
📫 Subscribe:
We are dedicated to the OpenAI mission and have pursued it every step of the way.
We’re sharing some facts about our relationship with Elon, and we intend to move to dismiss all of his claims.
𝕏 getting a bit heated and making some great jokes at my expense, which I've very much enjoyed, but I'm really just making a point of empathy and solidarity here.
What actually is going on at OpenAI?
The Information is reporting that 3 leaders have left the company:
— Greg Brockman (co-founder and president)
— John Schulman (co-founder and head of alignment)
— Peter Deng (head of ChatGPT)
New DHS AI Safety and Security Board includes CEOs of OpenAI, Anthropic, Google, Microsoft, Nvidia, AMD, AWS, IBM, Cisco, Adobe, Delta Air Lines, Occidental Petroleum, and Northrop Grumman.
Sure does look like regulatory capture to me. Almost none with any AI safety credentials.
This morning the Department of Homeland Security announced the establishment of the Artificial Intelligence Safety and Security Board. The 22 inaugural members include Sam Altman, Dario Amodei, Jensen Huang, Satya Nadella, Sundar Pichai and many others.
It is worth taking a step back and noting that it is actually insane how weakly frontier AI companies are regulated, given that they're building what they themselves expect will be the most powerful and most dangerous technology ever.
Legal scholar Lawrence Lessig:
“Thus, as a handful of companies race to achieve AGI, the most important technology of the century, we are trusting them and their boards to keep the public’s interest first. What could possibly go wrong?“
An excellent piece by Lessig in CNN.
In
Microsoft CEO on OpenAI:
Doesn't matter "if OpenAI disappeared tomorrow."
"We have all the IP rights and all the capability."
"We have the people, we have the compute, we have the data, we have everything."
"We are below them, above them, around them."
@MartinShkreli
I'm quite sure that people like Daniel Kokotajlo really care. He gave up (at least) 85% of the net worth of his family just so he could have the opportunity to criticize OpenAI in the future:
Wow. This is an impressive amount of character from a former OpenAI employee who recently left the company “due to losing confidence that it would behave responsibly around the time of AGI.” He gave up a lot of money to retain his ability to criticize the company in the future.
"OpenAI announces GPT-4.5 Turbo, a new model that surpasses GPT-4 Turbo in speed, accuracy and scalability. Learn how GPT-4.5 Turbo can generate natural ..."
It's real. Bing Web Cache leaks the GPT-4.5 announcement. This isn't a bait, I confirmed it myself as well.
What is OpenAI’s Project Strawberry?
Strawberry is a secret reasoning technology reported on by Reuters on Friday. Among the documents, OpenAI detail their plan to use Strawberry to enable their agents to navigate the internet autonomously and reliably perform deep research.
1. We estimate the chance of AI catastrophe, often referred to as P(doom), and find an aggregate prediction of 30%.
• We define AI catastrophe as the death of >95% of humanity.
• The predictions range from 8% to 71%.
• Everyone involved had AI-specific forecasting experience.
What does he mean by "certain demographics"? Like he found college students on college campuses? lol
I don't know what these students told Sam Altman, but I don't find it at all surprising that people treat what he says with skepticism.
I've talked to many people offline about
It's remarkable that, 1.5 years after GPT-4 was trained, people are still discovering new ways in which it is more capable than assumed.
A problem for evaluations and so-called Responsible Scaling Policies, as models may have latent capabilities not evident until after release.
🚨 Our new paper: we know that GPT-4 generates better ideas than most people, but the ideas are kind of similar & variance matters
But it turns out that better prompting can generate pools of good ideas that are almost as diverse as from a group of humans
Sam Altman: I want to buy your voice.
Scarlett Johansson: No.
Sam went ahead and stole her voice anyway.
When pressed by Scarlett's lawyers, OpenAI reluctantly agreed to take down the voice.
Just going to point out that Adam, CEO of Quora, is also a member of the board of OpenAI. Quora is receiving $75M in funding from Andreessen Horowitz, a venture capital firm co-founded by accelerationist Marc Andreessen.
It seems suboptimal to be funded by a firm like
We are excited to announce that Quora has raised $75M from Andreessen Horowitz. This funding will be used to accelerate the growth of Poe, and we expect the majority of it to be used to pay bot creators through our recently-launched creator monetization program. (thread)
All I want for Christmas is an AI treaty that caps compute, establishes a CERN for AI Safety, and sets up an IAEA-like overseeing body.
It's time to build.
On stealing the future: I'll go further, and say that I think OpenAI, by recklessly advancing towards AGI at full speed and putting all of humanity at risk, is stealing all our futures. Not in terms of jobs, but existentially.
Or more precisely, they risk destroying our futures.
@Scott_Wiener
@geoffreyhinton
@lessig
Makes perfect sense to regulate what will be the most dangerous technology ever.
Something that Californians overwhelmingly support:
🚨 New
@TheAIPI
polling shows that a strong bipartisan majority of Californians support the current form of SB1047, and overwhelmingly reject changes proposed by some AI companies to weaken the bill.
Especially notable: "Just 17% of voters agree with Anthropic’s proposed
It's good to see strong support across the US political spectrum for AI regulation.
• 77% say government should do more to regulate AI
• 65% say government should be regulating, rather than leaving it to self-regulation
• Most Americans support a bipartisan effort on this
1/
@TheAIPI
’s latest polling featured in
@Politico
today. We found people prefer political candidates that take strong pro-regulation stances on AI. (We did not reveal to respondents the source of the quotes below.)
Yet another AI safety researcher has left OpenAI.
Gizmodo, and
@ShakeelHashim
in his newsletter, report that Cullen O'Keefe, who worked on AI governance, also quit the company last month. He announced it on his LinkedIn and in a footnote on his blog, Jural Networks.