
Dean W. Ball
@deanwball
Followers: 12K · Following: 15K · Media: 428 · Statuses: 7K
Senior Policy Advisor for AI and Emerging Technology, White House Office of Science and Technology Policy | Strategic Advisor for AI, @NSF (opinions my own)
Joined April 2009
Can you imagine what would have happened if someone had discovered “do not criticize Sam Altman or Joe Biden” in an OpenAI system prompt?
results replicated 😬. any comment, @grok? if this has indeed been made part of the sys instructions, we’re gonna need to know why. manipulating search results seems like a funny way to combat misinformation, no? community could use some clarity on this
43 replies · 233 reposts · 3K likes
Under a strict reading of the AI Act, ChatGPT Advanced Voice is *illegal* in EU workplaces and schools because the system can recognize a user’s emotions. That’s prohibited by the AI Act.
Advanced Voice is not yet available in the EU, the UK, Switzerland, Iceland, Norway, and Liechtenstein.
143 replies · 353 reposts · 3K likes
has anyone else noticed that this guy almost always includes a picture of himself in his tweets?
As of today, thanks to the #DMA, a new app store can be downloaded on iOS & Android devices. Great news for EU users & app developers across the globe who want to do business here. Yes, gamers, Europe means more #FREEDOM & choice! 🎮🇪🇺 #FreeFortnite
48 replies · 40 reposts · 2K likes
“California literally made you” says the state senator to Elon Musk. He says this with a straight face because he believes that government policy is *responsible* for everything good that happens in its jurisdiction. Yet somehow never responsible for the bad.
California literally made you with taxpayer subsidies & because it’s the best place around. Will this be a fake temper tantrum move just like Tesla’s fake “move” to Texas?
68 replies · 83 reposts · 2K likes
Remember when Thomas Edison asked everyone in the world for their consent to invent the lightbulb? Or when Steve Jobs asked every American whether Apple should build the iPhone?
OpenAI is building tech that aims to totally change the world without asking if we consent. It's undemocratic. And Sam Altman just proved that bespoke corporate structures & voluntary commitments won't cut it — we need LAWS that give independent oversight.
77 replies · 78 reposts · 2K likes
I am happy to announce that I have joined the White House Office of Science and Technology Policy as a Senior Policy Advisor on AI and Emerging Technology. It is a thrill and honor to serve my country in this role and work alongside the tremendous team @mkratsios47 has built.
169 replies · 65 reposts · 1K likes
The thing I really appreciate about VP Vance's discussion of manufacturing here is that he makes a *Hayekian* case for reindustrialization. He says that by losing manufacturing, we lose the benefits of what he calls "network effects" and specialized urban knowledge clusters.
the full @JDVance keynote from the a16z american dynamism summit today. unbelievable support and commitment to make manufacturing great again. a thread.
31 replies · 97 reposts · 617 likes
One of the few AI papers I reliably hear referenced by DC policy types is from 2023. The authors tried to train a new model with 90%-100% of training data generated by a 125 *million* parameter model. Unsurprisingly, they found that you cannot successfully train a model entirely on such synthetic data.
Some ideas get extremely overrated by the sheer virtue of having a cool name (e.g., 'model collapse').
27 replies · 48 reposts · 587 likes
My basic reaction to AI today is, “jeez, o1 performs in the top 1% of humans at math, yet fails routinely at basic logic tasks. I guess intelligence is a high-dimensional space, and that probably means, like most high-dimensional things, it behaves counterintuitively.”
A key crux in the AI safety debate is Empiricism vs Rationalism in the philosophical sense. Rationalists see nature as lawful, internally consistent and self-similar, which it must be for the universe to be intelligible to human minds in the first place. Empiricists believe …
20 replies · 27 reposts · 483 likes
People who make confident predictions about AI risk should reflect on the fact that the election proceeded with essentially *zero* deepfake or otherwise AI-related problems. One year ago people were confidently predicting AI would “dominate” this election.
45 replies · 126 reposts · 451 likes
“We, the California beach NIMBY commission, reject the US military’s request to do things on US military property because we disfavor the politics of your contractor’s CEO.” Ah yes, federalism. Just as the founders intended it.
NEWS: The California Coastal Commission has rejected the Air Force's plan to allow @SpaceX to launch up to 50 rockets annually from Vandenberg, citing Elon Musk's political posts on X. Both the Air Force and Space Force supported the plan. Insane.
6 replies · 21 reposts · 403 likes
Texas for some ungodly reason is considering a law that would make plumbers, electricians, lawyers, and countless other small businesses file “algorithmic impact assessments” for a huge range of AI uses. As I’ve written before, this is among the worst ways to do AI policy.
Breaking AI news: A troubling new AI regulatory bill has been floated in Texas that borrows from the same heavy-handed, EU-like policy model that we saw implemented in Colorado, and which almost passed in Connecticut. Rightly called a “sweeping” measure in the attached article, …
20 replies · 38 reposts · 382 likes
I can confirm that Deep Research is capable of automating tasks that would have taken me at least a day, if not longer, of dedicated research. This might very well be the most productivity-enhancing technology product for me since GPT-3.5, and it could be bigger.
I am going to be more measured on OAI Deep Research because my off-the-cuff tweets about gemini deep research went viral and ultimately didn’t reflect my full thoughts on that product. But let me just say: this is VERY good.
16 replies · 25 reposts · 351 likes
Like almost all other AI-generated political content, this clip:
1. Does not demonstrate a need for “AI regulation.”
2. Would almost certainly be unconstitutional for government to regulate.
3. Has not meaningfully affected the information environment.
5 replies · 34 reposts · 280 likes
@YaelOss oh it destroys it. It takes WAY longer but it just looked at like 600 websites and compiled a very high quality report.
4 replies · 0 reposts · 268 likes
The only justification for firing probationary employees is if you think firing government employees is an intrinsic good, regardless of their talent or competence. Indeed, firing probationaries is likely to target younger, more tech and AI-savvy workers.
Firing the newest ("probationary") government employees is a great way to cripple new fields (such as AI).
10 replies · 36 reposts · 263 likes
@aidan_mclau ok so this one is giving away a little bit about a forthcoming piece of mine, but also niche and purely non-technical--in fact, it's nearly a pure humanities question. the prompt was: "did beethoven write solo piano music that would have been technologically impossible for …
18 replies · 13 reposts · 251 likes
My guess: It was obvious by now to everyone who is paying attention and capable of changing their mind that models could do stuff like this, so not big news. The only people left are self-consciously incapable of updating (e.g., Gary Marcus), so gain nothing by commenting on this.
Anthropic seems to have found evidence of their models thinking via quite nuanced planning as opposed to pure next-token prediction, but the reaction to the paper was muted. What am I missing?
9 replies · 9 reposts · 244 likes
This doesn’t surprise me in the slightest. o1 is a very good legal reasoner and I cannot wait for it to get document upload. People continue to sleep on o1 for reasons that escape my understanding.
OpenAI CFO Sarah Friar says lawyers are reporting that the new o1 reasoning model can do the work of a $2000/hour paralegal
22 replies · 13 reposts · 211 likes
For the record: I expect AI to add something like 1.5-2.5% GDP growth per year, on average, for a period of about 20 years that will begin in the late 2020s. That is *wildly* optimistic and bullish. But I do not believe 10% growth scenarios will come about.
We developed GATE: a model that shows how AI scaling and automation will impact growth. It predicts trillion‐dollar infrastructure investments, 30% annual growth, and full automation in decades. Tweak the parameters—these transformative outcomes are surprisingly hard to avoid.
21 replies · 0 reposts · 208 likes
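For scale, here is the compounding arithmetic behind this disagreement (my own illustration; the multipliers below are not from either tweet): after 20 years at a constant growth rate $g$, GDP is multiplied by $(1+g)^{20}$.

$$(1.015)^{20} \approx 1.35,\qquad (1.025)^{20} \approx 1.64,\qquad (1.10)^{20} \approx 6.7,\qquad (1.30)^{20} \approx 190.$$

So the "wildly optimistic" 1.5-2.5% range compounds to an economy roughly 1.35x-1.64x its starting size, while sustained 30% growth implies one nearly 190x larger.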
On an entirely unrelated note, did you know that this guy's company has advocated for making it a felony to open source a gpt-4-class model? And requiring a government license to train a gpt-3.5-class model? (in a report commissioned by the US government, by the way)
Within the next 18 months, public opinion will start to turn against open-source AI. This will happen because of one or more highly visible incidents of misuse of an open-source model, probably associated with significant damage or loss of life. 80% confident.
9 replies · 23 reposts · 202 likes
glad they got to interview Gary Marcus for that crucial 2% of ai experts who have never used a chatbot.
New numbers from Pew this morning reveal a large gap in perception between the general public and people whose work and research relates to AI. Usage: 66% of the general US public have still never used AI. You probably have a good idea of who the 2 percent of experts are.
9 replies · 3 reposts · 205 likes
There is a certain complacency associated with the belief that “we live in a democracy.” Yes, you vote. But you also live underneath a vast bureaucratic apparatus that has a logic and a momentum of its own. This piece is about what that machine is doing to AI.
This is how you assert control over the most promising emerging technology in a generation. You do it before it’s popular, before people will notice too much. You do it quietly, behind closed doors in working groups and workshops and steering committees. You do it with the active …
7 replies · 21 reposts · 168 likes
Every time the EU releases a new AI policy document I think, “wow, lots of this is quite under-specified and will likely need to be followed up with another policy document.” And I wonder whether that is in fact the point of this whole endeavor: more work for framework-drafters.
🚨NEW: The European Commission has just published the first draft of the Code of Practice for general-purpose #AI model providers.
14 replies · 18 reposts · 177 likes
I see *nothing* about current language models that justifies something like SB 1047. If that changes with future models, my desired policy regime will change with them. Most 1047 supporters I know agree with me on the first point, but argue that we will be “too late” if we wait.
The irony is that commitment to empiricism used to be a core trait of rationalists. Then AI doom came along and hacked utilitarianism and now all core traits have been jettisoned because in the face of oblivion principles no longer matter. An epistemic ouroboros.
17 replies · 14 reposts · 172 likes
In @TIME this morning, @DKokotajlo67142 and I make the case for frontier AI lab transparency—either as a voluntary commitment or as a law. While we disagree quite a bit about the trajectory of AI, we concur here. Link to the piece, with much more detail, in the reply.
13 replies · 24 reposts · 175 likes
Finally got a chance to read this piece, which argues that o1-style reasoners will not generalize beyond domains with easy verification. It may well be true, but I have some reasons for doubting Aidan’s thesis. One I want to highlight in particular:
i wrote a new essay called “The Problem with Reasoners” where i discuss why i doubt o1-like models will scale beyond narrow domains like math and coding (link below)
5 replies · 6 reposts · 165 likes
Last August @ajeya_cotra asked me what capabilities in future models would change my opinions on prudent policy measures. My response is below. I believe we have now seen what I described, and so my opinions about prudent policy measures have, in fact, changed.
@ajeya_cotra I feel reasonably confident that the labs have figured out grounded mathematical reasoning—so a big question for me is going to be, “does that reasoning translate generally beyond math?” If it seems like it does—and it need not be perfect—I’ll start to be more convinced.
3 replies · 13 reposts · 155 likes
I do not expect DeepSeek to continue open sourcing their frontier models for all that much longer. I give it 12 months, max.
"DeepSeek’s leaders have been worried about the possibility of information leaking"."told employees not to discuss their work with outsiders". Do DeepSeek leaders and the Chinese government know that DeepSeek has been open sourcing their ideas?
12 replies · 16 reposts · 153 likes
I'm excited to announce that @binarybits and I are launching a podcast! We call it AI Summer, and it will feature interviews with researchers, analysts, and other experts from across the AI world. First up is the ever-excellent @JonAskonas on AI policy in the Trump admin.
6 replies · 27 reposts · 151 likes
This was an unbelievably good event. Truly one of the best I’ve ever attended, on any subject. An exceptionally high-quality and diverse group of people, in the excellent Lighthaven venue. Already excited for next year. Congratulations to @rootsofprogress!
Announcing Progress Conference 2024: Toward Abundant Futures. Hosted by @rootsofprogress together with @foresightinst @HumanProgress @TheIHS @IFP @WorksInProgMag. Keynotes from @patrickc @tylercowen @jasoncrawford @sapinker. Berkeley, Oct 18–19
5 replies · 15 reposts · 150 likes
@GreatDecoupling I don’t care about what is “symmetrical.” Censorship is censorship. There is no excusing it.
3 replies · 0 reposts · 146 likes
I expect that a frontier model will produce policy briefs more persuasive and well-considered than even a "good" think tank senior fellow or academic within a year--and it could happen as soon as o3-pro in a couple months. o1-pro already can meet this threshold sometimes.
It can be hard to “feel the AGI” until you see an AI surpass top humans in a domain you care deeply about. Competitive coders will feel it within a couple years. Paul is early but I think writers will feel it too. Everyone will have their Lee Sedol moment at a different time.
12 replies · 11 reposts · 148 likes
By the way, @Scott_Wiener has an AI-related bill (SB 53) this session that seems eminently reasonable to me. It:
1. Creates a committee to study doing CalCompute, a public compute cluster
2. Establishes whistleblower protections for frontier lab employees
The whistleblower …
4 replies · 15 reposts · 136 likes
good rebuttal by Ethan below to this really bad op-ed in the WSJ today. not sure what "inside view" the piece was supposed to represent, but I would say it represents the views of approximately zero AI researchers or executives I've spoken to.
I wish people would stop repeating these claims that AI is plateauing as if they are facts. AI might hit a roadblock, we don’t know, but every one of these issues has multiple studies stating the opposite: synthetic data works, scaling is fine, etc. We need more nuance on the AI future
7 replies · 13 reposts · 133 likes
It seems possible that America’s compute export controls drove DeepSeek to pursue these radical training efficiencies. Their sophistication may exceed that of US labs in at least some important ways, though here I am only speculating. Export controls have been known to backfire in …
DeepSeek (Chinese AI co) making it look easy today with an open weights release of a frontier-grade LLM trained on a joke of a budget (2048 GPUs for 2 months, $6M). For reference, this level of capability is supposed to require clusters of closer to 16K GPUs, the ones being …
4 replies · 6 reposts · 132 likes
There you have it. First credible Chinese replication of the OpenAI o1 paradigm, approximately 9 weeks after o1 was released. And it’s apparently going to be open source.
🚀 DeepSeek-R1-Lite-Preview is now live: unleashing supercharged reasoning power! 🔍 o1-preview-level performance on AIME & MATH benchmarks. 💡 Transparent thought process in real-time. 🛠️ Open-source models & API coming soon! 🌐 Try it now at #DeepSeek
10 replies · 24 reposts · 128 likes
I concur with @hamandcheese that the correct way to understand DOGE is not as a cost-cutting or staff-firing initiative, but instead as an effort to prepare the federal government for AGI. Trump describing it as a potential "Manhattan Project" is more interesting in this light.
This OPM memo is going to be the most impactful news of the day, but I'm not sure it'll get much reporting.
16 replies · 24 reposts · 125 likes
I cannot stress enough how in the last six months many future “sci-fi” AI capabilities, good and bad, have gone from being speculative or very uncertain on timing to being, basically, a near-term certainty (24 months). When I saw o1-preview, it all became clear.
the vibe shift from “sci-fi future is some distant maybe” to “sci-fi future is within reach” has been abrupt & total—the last *six months* have been like pulling back a curtain & realizing the future was just sitting there, waiting for us to move out of the way. now it’s just a …
1 reply · 19 reposts · 123 likes
The simple fact is that SB 1047 is *speculative* regulation, regulation of a thing that has never happened, of a thing that we do not know will ever happen. A culture with such low risk tolerance will struggle to do interesting things. And boy, will it hate the coming century.
I see *nothing* about current language models that justifies something like SB 1047. If that changes with future models, my desired policy regime will change with them. Most 1047 supporters I know agree with me on the first point, but argue that we will be “too late” if we wait.
10 replies · 14 reposts · 120 likes
This investigation is rooted in the idea that any sufficiently successful corporation is inherently suspicious and worthy of government harassment. This sends an awful signal to entrepreneurs, and is easily the worst tech antitrust investigation I've seen.
BREAKING: Nvidia has been subpoenaed by the US Justice Department in an escalation of the agency's antitrust investigation
6 replies · 13 reposts · 119 likes
Not only this, but European elite attitude toward the US is part of what drives their tech regulation. In the run-up to the AI Act, there was a lot of talk along the lines of “we don’t want American values polluting our society.” European elites do not like us.
I wonder how much Europe's inferiority complex toward the U.S. is holding it back from solving these problems. Post a chart like this, and you get nothing but the wildest cope -- fantasies about how Americans don't have health care or vacations and their cities are hellholes.
7 replies · 8 reposts · 110 likes
I don’t tend to get angry about the things I write about for a living. But the arbitrary and parasitic way in which the European Union extracts money from US tech firms, while our own government applauds them, is one of the few things that does, indeed, make me mad.
How the EU Weaponizes Regulation to Extract Billions from American Tech. With its new Digital Services Act (DSA) and Digital Markets Act (DMA), European Union regulation threatens to significantly cut profit margins necessary for R&D, capital expenditures and other strategic …
12 replies · 15 reposts · 108 likes
I do not agree with Miles on many policy prescriptions, but I do agree with him that the policies we enact in 2025 will be hugely important in setting the tone for the development of exceptionally capable AI. This is why I am concerned about profoundly unserious proposed and …
In my new blog post, I share a starter pack for feeling the AGI: resources and arguments indicating that AI is virtually certain to exceed human performance in most areas in the next few years, and that the time to act is now.
5 replies · 13 reposts · 110 likes
sb 1047 subtweet (I agree!). More people should be worried about what happens if Brussels and Sacramento decide US tech policy. The vast majority of Americans would have no say in our own tech policy. This is a sovereignty, and political, crisis—just waiting to happen.
we are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models. for many reasons, we think it's important that this happens at the national level. US needs to continue to lead!
6 replies · 9 reposts · 101 likes