Will Henshall
@henshall_will
Followers
836
Following
525
Media
37
Statuses
163
AI Governance and Policy @coeff_giving
London, UK
Joined October 2021
Leading AI companies have agreed to share their models with the US AI Safety Institute for pre-deployment testing, says Gina Raimondo. This and more in my profile of the Secretary of Commerce. https://t.co/XWknuRnRwr
time.com
Raimondo’s Commerce Department is leading efforts to maintain U.S. supremacy and develop safety standards as AI advances
I was pretty surprised by their estimates of how much of the cost of developing a frontier AI model is accounted for by researcher compensation. ~30-50% for the four models they made estimates for!
A new report from Epoch AI examines how the cost of the computational power required to train AI systems has been increasing over time. According to their estimates, it's been doubling every nine months
Audrey Tang is stepping back from her ministerial duties to embark upon a world tour to promote the ideas that she helped flourish in Taiwan—ideas captured in Plurality, a book Tang has co-authored with E. Glen Weyl and more than 100 online collaborators https://t.co/QJsIU4nHCh
time.com
As Taiwan's digital minister, Tang championed digital democracy. Now she's taking her ideas global.
Microsoft and Amazon are starting to compete with their investees, OpenAI and Anthropic. Can the smaller companies stay ahead of their compute-rich big tech backers? “For the next few years, I don't have concerns about this,” says @jackclarkSF
https://t.co/OBspFAefvI
time.com
Microsoft and Amazon, once merely investors in OpenAI and Anthropic, are now competing by making their own models.
I spoke with Michelle Donelan, Secretary of State for Science, Innovation and Technology, about the AI safety testing agreement she just signed on a flying visit to DC. https://t.co/OJRhC7UvqM
time.com
The two AI safety testing bodies will exchange employees and share information as AI continues to rapidly evolve.
Exclusive: New research provides a way to measure whether an AI model contains potentially hazardous knowledge, along with a technique for removing the knowledge from an AI system while leaving the rest of the model relatively intact. https://t.co/KTEg0fhDkV
time.com
Researchers have developed new techniques to prevent AI from being used to carry out cyberattacks and deploy bioweapons.
On Tuesday, the House launched a bipartisan Task Force on Artificial Intelligence. I spoke with its members to understand their priorities. https://t.co/H5CUSNTH6W
time.com
TIME spoke to members of the House's new Task Force about their priorities—and finding common ground on AI.
“I can't help but read it as a 2000 year old blog post, arguing with another poster,” says @natfriedman. “It's ancient Substack, and people are beefing with each other, and I think that's just amazing.” My piece on the Vesuvius Challenge winners: https://t.co/J1Bqkmhz9O
time.com
The author seems to be discussing the question: are things that are scarce more pleasurable as a result?
I reported on the NAIRR pilot and the problem it seeks to address. The gap between industry and academia on some measures is pretty stark.
Link to the full piece 👇 (6/6) https://t.co/uPN8oOZfXz
time.com
Predicting when artificial intelligence could outsmart humans is a complicated task, and experts disagree on the answer.
Many questions feed into these wildly varying predictions: How impressive are current AI systems? Will simply scaling them up produce AGI? And a lot could hinge on these forecasts, given the risks that many experts worry AI might pose (5/6)
And if you ask people with a track record of making accurate predictions—the superforecasters—they're even more skeptical (4/6)
If you ask a wider set of AI experts, they're less bullish. While around 10% make similar predictions to scaling hypothesis believers, most think AGI is a couple of decades away at least (3/6)
If you ask scaling hypothesis believers—which includes the leadership of many prominent AI companies—then AGI is probably less than ten years away. A model developed by AI forecasting org Epoch arrives at a similar conclusion (2/6)