@jpineau1
Joelle Pineau
1 year
How many of the signatories have used their expertise, their influence, their resources to rigorously analyze and expose the risks of current AI systems, and to build solutions that address real harms? That is what we need. Not letters, headlines and inflammatory statements.
@ai_risks
Center for AI Safety
1 year
We’ve released a statement on the risk of extinction from AI. Signatories include: - Three Turing Award winners - Authors of the standard textbooks on AI/DL/RL - CEOs and Execs from OpenAI, Microsoft, Google, Google DeepMind, Anthropic - Many more
160
397
1K
26
74
499

Replies

@scottniekum
Scott Niekum
1 year
@jpineau1 A lot, actually. From a glance at the list, just sticking to people I know personally who do practical safety research: Me, Phil Thomas, Anca Dragan, Jacob Steinhardt, Vincent Conitzer, Scott Aaronson, Dylan Hadfield-Menell, Sam Bowman, David Krueger, Been Kim, + many more.
1
1
70
@jpineau1
Joelle Pineau
1 year
@scottniekum Indeed, there are people in the list doing really good work on this. This is what we should amplify!
3
0
14
@StefanFSchubert
Stefan Schubert
1 year
@jpineau1 The fact that solutions addressing real harms are good isn't a good argument against this letter
1
0
18
@AImpactfuLeader
Sonia Sarao
1 year
@jpineau1 Amen! Thank you for being a sane voice! πŸ™πŸ½
@AImpactfuLeader
Sonia Sarao
1 year
πŸš€ Latest Feeding Season of Fear: AI πŸ€– Let's be real: AGI (Artificial General Intelligence) is on the horizon. It's not a matter of 'if' but 'when'. Our only ticket out? Extinction in the next few years, but let's not go there! πŸ˜… With AGI, we're not just opening Pandora's…
3
1
3
0
0
0
@Wiber
Elias Moosman
1 year
@jpineau1 Would you agree the problem with AI isn't technical? It seems it's the complexity of life and intelligence itself, more like a natural force. So far, papers and conversations like this are all there is; it's ephemeral and only builds on itself in an organic, linear way.
1
0
0
@labenz
Nathan Labenz
1 year
@jpineau1 @ai_risks, led by @DanHendrycks, are true leaders in this space! Many signatories - including those from OpenAI, Anthropic, & DeepMind - have invested serious resources to control current systems. I believe the results they are seeing from those efforts inspire this statement!
0
0
16
@azeem
Azeem Azhar
1 year
@jpineau1 πŸ’―
0
0
0
@QRJ211
QRJ21
1 year
@jpineau1 The same question can be asked of the non-signatories?
0
0
0
@MilitantHobo
MilitantHobo
1 year
@jpineau1 Imagine that sometime in the 1930s there was a bunch of physicists and activists concerned that nuclear research might lead to weapons that could threaten our survival as a species.
1
1
2
@VieiraMike
Mike Vieira
1 year
@jpineau1 The statement is a requisite precursor.
0
0
0
@willdye
Bil Dye
1 year
@jpineau1 Your concerns are valid, but I disagree with the overall tone of your tweet. It's hard to get a high-powered group to even talk. Yes, we need specific solutions. But to get there, we also need first steps - gain contact, agree to talk, agree to issue a vague joint statement, etc.
0
0
0
@AmauryRodguez
Amaury Rodriguez
1 year
0
0
0
@JMannhart
Jonathan Mannhart πŸ”
1 year
@jpineau1 Why not both? What is inflammatory about this letter? It seems genuinely good to me. The topic needs way more attention and resources!
0
0
4
@ai_in_check
ai_in_check
1 year
@jpineau1 If you actually check the list: a lot of them
0
0
0
@aisafetyfirst
AI Safety First!
1 year
@jpineau1 Give the signatories time to "put their money where their mouth is". Geoffrey Hinton, by quitting a very well-paying job, is one of them already walking the talk. We expect no less of ALL signatories. Agree @jpineau1, it's good to keep track of the signatories' actual efforts.
0
0
0
@chupvl
Vladimir Chupakhin
1 year
@jpineau1 How is this connected to the real impact of AI on society? Influence is one thing; negative impact is another.
0
0
0