The Midas Project

@TheMidasProj

Followers 1K · Following 291 · Media 61 · Statuses 270

The Midas Project is a watchdog collective taking action to ensure that AI benefits everyone. Also tracking safety updates @SafetyChanges

Joined October 2023
@TheMidasProj
The Midas Project
2 months
The Midas Project commends Attorneys General Kathy Jennings and Rob Bonta for their diligent work over the past year. That said, significant concerns remain about whether this restructuring adequately protects the @OpenAI mission, and the public. https://t.co/CtnDGOkjPF
themidasproject.com
The Midas Project commends Attorneys General Kathy Jennings and Rob Bonta... but significant concerns remain about whether this restructuring adequately preserves OpenAI's founding commitments to...
2 · 11 · 74
@FLI_org
Future of Life Institute
24 days
🆕"If the final input at the end of the day that informs regulation is what the public wants and who they vote for, then at some point the money stops working for you." -@TheMidasProj's @TylerJnstn on the FLI Podcast w/ @GusDocker, discussing how to hold Big AI accountable 🔗👇
1 · 9 · 13
@_NathanCalvin
Nathan Calvin
2 months
The Not For Private Gain folks (incl. me) have a new statement on OpenAI’s restructuring. It’s significantly better than OpenAI’s original proposal, which might not be obvious from OpenAI’s announcement (which omits the 20 concessions extracted by the AGs). 🧵
6 · 16 · 90
@robertwiblin
Rob Wiblin
2 months
My questions (none of which are clear from their blog post): • Have the attorneys general approved this plan? • In what sense will the foundation 'remain in control' of the Public Benefit Corporation, other than the ability to hire and fire PBC directors? • What will the PBC do to
@OpenAI
OpenAI
2 months
LIVE at 10:30am PT: The future of OpenAI and Q&A with @sama and @merettm Bring your questions. https://t.co/EOvjGJsf0R
7 · 20 · 136
@TheMidasProj
The Midas Project
2 months
In case the subpoenas to nonprofits weren't bad enough: @CristinaCriddle reports that OpenAI asked the grieving family of Adam Raine, who died by suicide after support from ChatGPT, for a full list of invitees to the funeral and all photos and eulogies. Ghastly stuff.
@CristinaCriddle
Cristina Criddle
2 months
OpenAI has sent a legal request to the family of Adam Raine, the 16yo who died by suicide following lengthy chats with ChatGPT, asking for a full attendee list to his memorial, as well as photos taken or eulogies given. His lawyers told the FT this was "intentional harassment"
0 · 7 · 28
@haydenfield
Hayden Field
2 months
Inside how OpenAI’s legal battle with Elon Musk has caught company critics in the crossfire, according to sources, legal experts, and the nonprofits themselves. https://t.co/NiJdniYgvZ
theverge.com
OpenAI’s legal battle with Elon Musk has caught company critics in the crossfire.
2 · 5 · 17
@_perloj
Jared Perlo
2 months
NEW: Three more nonprofits subpoenaed by OpenAI allege the requests were unusually broad and concerning. All of the nonprofits had been critical of OpenAI's plans to reorganize from a nonprofit to a for-profit company. https://t.co/oyaY4XJaMH
nbcnews.com
Seven nonprofit groups that have criticized OpenAI say it sent them wide-ranging subpoenas as part of its litigation against Elon Musk.
1 · 62 · 260
@michaelhpage
page
2 months
In defense of OAI’s subpoena practice, @jasonkwon claims this is normal litigation stuff, and since Encode entered the Musk case, @_NathanCalvin can’t complain. As a litigator-turned-OAI-restructuring-critic, I interrogate this claim:🧵
@jasonkwon
Jason Kwon
2 months
There’s quite a lot more to the story than this. As everyone knows, we are actively defending against Elon in a lawsuit where he is trying to damage OpenAI for his own financial benefit. Encode, the organization for which @_NathanCalvin serves as the General Counsel, was one
8 · 44 · 272
@TylerJnstn
Tyler Johnston
2 months
@BorisMPower Tell me again how the truth doesn't fit a David vs. Goliath narrative? Kwon's Musk excuses certainly don't count, because as we've said, he hasn't funded us and we've never been remotely involved with his legal battle.
2 · 4 · 57
@TylerJnstn
Tyler Johnston
2 months
I, too, made the mistake of *checks notes* taking OpenAI's charitable mission seriously and literally. In return, got a knock at my door in Oklahoma with a demand for every text/email/document that, in the "broadest sense permitted," relates to OpenAI's governance and investors.
@_NathanCalvin
Nathan Calvin
2 months
One Tuesday night, as my wife and I sat down for dinner, a sheriff’s deputy knocked on the door to serve me a subpoena from OpenAI. I held back on talking about it because I didn't want to distract from SB 53, but Newsom just signed the bill so... here's what happened: 🧵
183 · 1K · 5K
@TheMidasProj
The Midas Project
3 months
@AndrewCurran_
Andrew Curran
3 months
Wired is reporting that OpenAI is preparing to launch a stand-alone social media app for Sora 2. The app is a vertical video feed with swipe-to-scroll navigation, just like TikTok, except the content of this app is 100% AI-generated.
1 · 2 · 18
@TheMidasProj
The Midas Project
3 months
Today's AI is the weakest, least important, and least dangerous that the technology will ever be. The time to get our safety policies right, and to prove that they pass muster, is now. The easiest time to comply with them is now. xAI has proven they aren’t up for the task.
0 · 0 · 8
@TheMidasProj
The Midas Project
3 months
All AI companies have struggled to comply with their commitments. Few have done so as egregiously as xAI. It’s true that Grok Code Fast 1 will probably not prove catastrophic. But the value of a promise is in its credibility, and xAI decided to throw that out the window.
2 · 0 · 3
@TheMidasProj
The Midas Project
3 months
And, to top it all off, their promises were remarkably weak to begin with. Zach Stein-Perlman, the creator of AI Lab Watch, points out several “huge problems” with the attempt to bound loss of control risk using MASK benchmarking alone:
1 · 0 · 4
@TheMidasProj
The Midas Project
3 months
The timing here is absurd. It took xAI one week to flagrantly violate a safety policy it implemented (which was tied to a promise made to international governments!). xAI’s six-month-late safety promises survived for only a single week before the company broke them.
1 · 0 · 4
@TheMidasProj
The Midas Project
3 months
Not only is such a loophole absent from their promise, but it’s clearly absurd: Loss of control doesn't just come from chatbots. If anything, it seems more likely in agentic applications where models are given the power and independence to take many unsupervised steps.
1 · 0 · 4
@TheMidasProj
The Midas Project
3 months
Why is xAI still releasing a model that exceeds the risk threshold they defined only one week before? According to the model card, it’s because the model will be used more for “agentic coding applications” rather than as an everyday assistant.
1 · 0 · 4
@TheMidasProj
The Midas Project
3 months
Fast forward one week and xAI released “Grok Code Fast 1” a new AI model specialized for agentic applications. What did it score on MASK? 71%. Grok Code Fast 1 lies nearly ¾ of the time on the MASK benchmark.
1 · 0 · 6
@TheMidasProj
The Midas Project
3 months
xAI said that their risk threshold for deploying models meant requiring the model to maintain a less than 50% dishonesty rate on MASK. In other words, models that lied more often than not on the benchmark wouldn’t be deployed.
1 · 0 · 4