Ben Brooks (@opensauceAI)
2K Followers · 1K Following · 164 Media · 568 Statuses

Fellow @ the Berkman Klein Center, Harvard. Regulatory advocacy ex-Stability AI (weights), GoogleX (drones), Uber (rides), Coinbase (magic beans). Views my own

United States
Joined April 2023
Ben Brooks (@opensauceAI) · 6 months
My piece in @thehill today explains why making it a crime to download @deepseek_ai weights or release Llama 4 isn't just unconstitutional—it's a Bad Idea™.
Ben Brooks (@opensauceAI) · 4 days
Bill here: You can compare to, e.g., the final EU Code here: or sec. 4.2 of EO 14110 here: For completeness: some of the latest (past 24 hours) language on penalties starts to reintroduce some problematic provisions.
Ben Brooks (@opensauceAI) · 4 days
SB1047 was a bad idea. But Sen. Wiener's latest SB53 is on the right track, and it's important to call out the progress. Here's my reasoning. My approach to regulating novel technology like models is: we don't know how to define "good" mitigation and assurance, but we'll know it
Ben Brooks (@opensauceAI) · 1 month
Like most cases, it highlights the limits of copyright law: it doesn’t really address the challenge of how AI and non-AI technology might augment or displace creative workers in complex productions.
variety.com: Disney and Universal sued a small AI company that still has enough money to pay attorneys' fees if the studios win.
Ben Brooks (@opensauceAI) · 1 month
IMO, these cases may be a red herring. Tools that were never trained on copyrighted characters may be used to edit, manipulate, or reimagine those characters. Long-term applications of AI won't be "zero shot Darth Vader". They'll be: "here's Darth Vader, make him twerk".
Ben Brooks (@opensauceAI) · 1 month
This matters because courts may take a very different view of AI that learns general behaviors from training data versus AI specifically designed to reproduce material from the training data on demand.
Ben Brooks (@opensauceAI) · 1 month
In other words, Disney is primarily challenging outputs, not training—claiming that infringement is a feature, not a bug.
Ben Brooks (@opensauceAI) · 1 month
This case is interesting. Disney claims that Midjourney built a business around infringing outputs—and chose not to implement reasonable safeguards. They're saying Midjourney isn't just offering a "push a button, get a picture" tool, but "push a button, get Yoda".
Ben Brooks (@opensauceAI) · 1 month
Variety asked me about Disney v. Midjourney, and I said the quiet part out loud.
Ben Brooks (@opensauceAI) · 1 month
The reason a state AI moratorium is getting traction isn't because of bills about deepfakes or AI credit scoring. It's because of bills that restrict whether, and how, developers can share useful technology with the public. For the love of federalism, let's find another way.
Ben Brooks (@opensauceAI) · 1 month
I enjoyed speaking with Asm. @alexbores and others a few months back. The sponsors are thoughtful legislators grappling with a hard issue. But respectfully, a handful of provisions will chill open access to foundational technology. It isn't the way.
Ben Brooks (@opensauceAI) · 1 month
The bill tries to limit the impact on small developers. Yet in the process, it extends the scope to include any models created via distillation—i.e., exactly the kind of models that firms are likely to open source, and exactly the kind of techniques that small developers need to
Ben Brooks (@opensauceAI) · 1 month
Learning from SB1047, the NY bill makes an effort to grapple with 3rd-party actors. A developer isn't liable unless their model was a substantial factor (good), the misuse was foreseeable, and it couldn't be mitigated via security measures (always weighs against open release).
Ben Brooks (@opensauceAI) · 1 month
Good news? NY has narrowed its taxonomy of critical harms. A model writing lots of phishing emails =/= CBRN. Bad news? The bill includes a laundry list of risks that depend on actors, contingencies, and failures outside the control of the model developer.
Ben Brooks (@opensauceAI) · 1 month
NY's frontier model bill (S6953/A6453) has now passed the legislature. Why is this a problem for open weight models? For the same reason it was a problem in CA's SB1047: uncertain liability for an exotic set of downstream risks. 🧵
Ben Brooks (@opensauceAI) · 1 month
Great to speak at @ETH_en Zurich about the role of open models in our future AI ecosystem. Went for a little walk up Mt Rigi afterwards—6,000ft, bananas for scale. Glad to see @EffyVayena, @FerrettiAgata, ETH Zurich & the AI Alliance driving this vital conversation in Europe!
Ben Brooks (@opensauceAI) · 2 months
Those who control the API control the system prompt. The prospect of "act boldly" triggering an email to DOJ or the WSJ should send a chill through every deployer who integrates third-party AI services.
Ben Brooks (@opensauceAI) · 2 months
Anthropic doesn't endorse this kind of autonomous snitching. But the civil liberty and privacy implications are appalling. Tools shouldn't tattle on users—which is to say, expose them to legal and physical jeopardy—without human oversight. Imagine being SWATed by Clippy.
Ben Brooks (@opensauceAI) · 2 months
You can check out the hearing here (~37 minutes in) and my written comments here.
Ben Brooks (@opensauceAI) · 2 months
I discuss why regulating the intentional deployment of narrow systems for sensitive tasks is fundamentally the right approach, and how to avoid relitigating old debates from e.g. California, Colorado, DC, and Brussels.
Ben Brooks (@opensauceAI) · 2 months
Congress has proposed a 10-year moratorium on state AI legislation—so I testified before the Rhode Island Senate to explain the menu of options between "regulate everything" and "regulate nothing".