Robert Trager

@RobertTrager

Followers: 799
Following: 123
Media: 5
Statuses: 100

Luddite-futurist, concerned anti-alarmist, hoping to live to the full contradiction of the time. Director, Oxford AIGI

Oxford, UK
Joined October 2011
@RobertTrager
Robert Trager
6 months
Overall, OAI seems to be offering potential future justifications for actions that aren’t consistent with what their safety standards have been in the past. They haven’t made the case that their new standards are better than their previous ones - but then, it's only a blog post.
@RobertTrager
Robert Trager
6 months
(c) The report doesn't mention staged deployment at all. In fact, it throws shade on OpenAI's own staged deployment approach to releasing GPT-2. Most everyone seems to like staged deployment for large firms when it comes to major advances - what gives?
@RobertTrager
Robert Trager
6 months
(b) Why come out and say we're not relying on theory anymore? Sometimes we can prove things about systems in advance of deployment and testing. Let's use all our tools - math, toy experiments, secure environments, etc.
@RobertTrager
Robert Trager
6 months
3 thoughts: (a) Iterative deployment and learning is good if you can do dense incremental deployment - GPT-4.5, 4.51, 4.52, and so on. This is sometimes possible, but sometimes impractical. There will be jumps in impact from some systems to others.
@RobertTrager
Robert Trager
6 months
(2) Rather than proving that systems are safe in advance (aka hard math), they’re going to rely on testing in a secure environment.
@RobertTrager
Robert Trager
6 months
This seems to mean they think each deployment will have only a slightly different impact on the world than the previous deployment. Therefore, they argue, it's OK to move a bit in the direction of deploy and see what happens.
@RobertTrager
Robert Trager
6 months
🧵OpenAI's new post “How we think about safety and alignment” is mostly the usual fare, but two things stand out: (1) They don't believe in discontinuous impacts of AI systems anymore; they will "embrace uncertainty" and learn from iterative deployment.
openai.com
The mission of OpenAI is to ensure artificial general intelligence (AGI) benefits all of humanity. Safety—the practice of enabling AI’s positive impacts by mitigating the negative ones—is thus core...
@RobertTrager
Robert Trager
7 months
16/16 Ranjit Lall, @benharack, @JuliaCMorse, @n_miailhe, @Scott_R_Singer, @mattsheehan88, Max Stauffer, @yi_zeng, @JoslynBarnhart, @ImaneBello, Xue Lan, @OliverEGuest, Duncan Cass-Beggs, @chuanyinglu, Sumaya Nur Adan, @Manderljung, Claire Dennis.
@RobertTrager
Robert Trager
7 months
15/16 THANK YOU to everyone who contributed! @_LuciaVelasco, @Charles_Mrt, @HZoete, @RobertTrager, Duncan Snidal, @bmgarfinkel, Kwan Yee Ng, @HaydnBelfield, Don Wallace, @Yoshua_Bengio, Benjamin Prud'homme, Brian Tse, @r0xanaradu.
@RobertTrager
Robert Trager
7 months
14/16 A multi-year roadmap would guide the summit series, regularly updated with expert input to reflect technological developments and emerging challenges. Full Report:
oxfordmartin.ox.ac.uk
The AI Summit series – initiated at Bletchley Park in 2023 and continuing through Seoul in 2024 and Paris in 2025 – has become a distinct forum for…
@RobertTrager
Robert Trager
7 months
13/16 For hosting selection: Countries would bid two years in advance, with votes from Track 1 participants. Regional rotation would be encouraged when feasible.
@RobertTrager
Robert Trager
7 months
12/16 A "troika" system would have three consecutive hosts collaborate on planning and agenda-setting, ensuring smooth transitions between summits.
@RobertTrager
Robert Trager
7 months
11/16 On timing: Annual summits would be supplemented by interim technical meetings. This helps track progress on commitments and responds to rapid AI developments.
@RobertTrager
Robert Trager
7 months
10/16 Leading AI labs could join Track 1 as observers, recognizing their crucial role in advanced AI development.
@RobertTrager
Robert Trager
7 months
9/16 Track 1 participation would be based on specific criteria: national AI capabilities, jurisdiction over frontier AI development, concentration of technical talent, and regulatory frameworks.
@RobertTrager
Robert Trager
7 months
8/16 Track 2 serves as a broader platform where diverse nations explore AI's public benefits and societal implications. This balances the need for focused expertise with inclusive dialogue.
@RobertTrager
Robert Trager
7 months
7/16 We propose a two-track structure. Track 1 focuses on advanced AI governance, bringing together nations leading in AI development and regulation.
@RobertTrager
Robert Trager
7 months
6/16 We examine core design elements for future summits: hosting arrangements, secretariat format, participant selection, agenda setting, and meeting frequency.
@RobertTrager
Robert Trager
7 months
5/16 Central recommendation: Maintain the series’ focus on advanced AI governance. This addresses a gap in the international ecosystem where no other forum specifically tackles frontier AI systems.
@RobertTrager
Robert Trager
7 months
4/16 This improvised nature was initially a strength, allowing quick responses to rapid AI developments. But as the series matures, more structure may help maintain its effectiveness.