Diego Dorn (@CozyFractal)
171 Followers · 2K Following · 190 Media · 2K Statuses
This is the start of my #calendrierdelavent (advent calendar) of #fractals! To celebrate, here is a zoom of the #Mandelbrot set colored with a triangle inequality average function. Feel free to RT my #DecembreFractal, it will warm my heart as much as a hot chocolate ❤️
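For the curious, a minimal sketch of the triangle inequality average (TIA) coloring for one point of the Mandelbrot iteration z → z² + c (my own illustrative code, not the script behind the image; the function name and parameters are assumptions):

```python
def tia_value(c, max_iter=200, bailout=1e6):
    """Triangle inequality average for one point c of z -> z**2 + c.

    At each step the triangle inequality bounds |z**2 + c| between
    ||z**2| - |c|| and |z**2| + |c|; we record where the new |z| falls
    within that range and average the ratios, giving a value in [0, 1]
    for escaping points and None for points that never escape
    (likely inside the Mandelbrot set).
    """
    z = 0j
    ratios = []
    for _ in range(max_iter):
        z2 = z * z
        z = z2 + c
        lo = abs(abs(z2) - abs(c))  # lower triangle-inequality bound
        hi = abs(z2) + abs(c)       # upper bound
        if hi > lo:                 # guard against a degenerate range
            ratios.append((abs(z) - lo) / (hi - lo))
        if abs(z) > bailout:
            return sum(ratios) / len(ratios) if ratios else 0.0
    return None
```

Mapping the returned average through a color palette gives the smooth banding typical of TIA renders; a full image just evaluates this over a grid of c values.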
Introducing Breaking Books, a tool to bring books to the social sphere. It is a game changer to read non-fiction books AND hang out with friends. We used AI to turn an epub into a beautiful deck of cards. The goal is to piece back the book together into a coherent mind map.
The time for AI self-regulation is over. 200 Nobel laureates, former heads of state, and industry experts just signed a statement: "We urgently call for international red lines to prevent unacceptable AI risks." The call was presented at the UN General Assembly today by Maria
Announcing Transluce, a nonprofit research lab building open source, scalable technology for understanding AI systems and steering them in the public interest. Read a letter from the co-founders Jacob Steinhardt and Sarah Schwettmann: https://t.co/IUIhBjpYhS
When you leave OpenAI, you get an unpleasant surprise: a departure deal where if you don't sign a lifelong nondisparagement commitment, you lose all of your vested equity:
vox.com: Why is OpenAI’s superalignment team imploding?
I’m super excited to release our 100+ page collaborative agenda - led by @usmananwar391 - on “Foundational Challenges In Assuring Alignment and Safety of LLMs” alongside 35+ co-authors from NLP, ML, and AI Safety communities! Some highlights below...
Le "champion européen", porté par le Président Français et utilisé pour affaiblir la régulation des IA...
This is a mind-blowing announcement. Mistral AI, the French company that has been fighting tooth and nail to water down the #AIAct's foundation model rules, is partnering up with Microsoft. So much for 'give us a fighting chance against Big Tech'. A 🧵1/8 https://t.co/WJtKNkq1K8
I would recommend applying, both for the highly important topics and for the nice people I met during my summer internship there.
I am looking for PhD students!! I'm increasingly interested in work supporting AI governance, e.g. that:
- highlights the need for policy, e.g. by breaking methods or models
- could help monitor and enforce policies
- generally increases affordances for policymakers
Addressing the long-term risks of AI doesn't mean we can ignore its present harms — and vice versa. In our @TIME op-ed, @achan96 and I argue that we need to break away from the false "present vs. future" harms dichotomy. https://t.co/y80CXDnMMI
time.com: Addressing the current problems with AI could help prevent extinction threats, experts argue.
The grand finale: congratulations to the winning teams 🏆 for presenting a 3-minute pitch as #honestbrokers during the #Science4policy workshop by @LetiMonte1 and @VeroCasartelli.
Note that I was not in this classroom, and I don't know who had this slide in their lecture. Though I'd love to talk with them to understand why we see the world so differently.
Feels strange to be studying at @EPFL... If "non-linear regression" is all it takes to build highly capable systems, we should definitely take the matter seriously. What will "non-linear regression" do in a few years? New sources of power are emerging; we need to adapt.
I've worried AI could lead to human extinction ever since I heard about Deep Learning from Hinton's Coursera course, >10 years ago. So it's great to see so many AI researchers advocating for AI x-safety as a global priority. Let's stop arguing over it and figure out what to do!
We just put out a statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Signatories include Hinton, Bengio, Altman, Hassabis, Song, etc. https://t.co/N9f6hs4bpa 🧵 (1/6)
Why the best outcome is never as good as it seems. And the worst is never as bad as it seems. A thread on a surprisingly little-known but really important concept: regression to the mean. /1
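A minimal simulation of the effect (my own illustrative sketch, not from the thread): model each observed score as stable skill plus one-off luck, pick the top 1% on a first trial, and watch the same people's average fall back toward the mean on a second trial:

```python
import random

def regression_demo(n=100_000, top_fraction=0.01, seed=0):
    """Observed score = stable skill + one-off luck. The top scorers
    of trial 1 were partly lucky, and that luck does not repeat, so
    their trial-2 average regresses toward the mean."""
    rng = random.Random(seed)
    skill = [rng.gauss(0, 1) for _ in range(n)]
    trial1 = [s + rng.gauss(0, 1) for s in skill]
    trial2 = [s + rng.gauss(0, 1) for s in skill]

    # Cutoff for the top 1% of first-trial scores.
    cutoff = sorted(trial1)[int((1 - top_fraction) * n)]
    top = [i for i in range(n) if trial1[i] >= cutoff]

    mean1 = sum(trial1[i] for i in top) / len(top)
    mean2 = sum(trial2[i] for i in top) / len(top)
    print(f"top {top_fraction:.0%} averaged {mean1:.2f} on trial 1")
    print(f"the same people averaged {mean2:.2f} on trial 2")

regression_demo()
```

With skill and luck contributing equal variance, the second-trial average lands roughly halfway between the first-trial average and the population mean.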
This is one of the more hopeful processes happening on Earth right now - because it may give rise to a culture of people with something like security mindset, who try to break things, instead of imagining how wonderfully they'll work. https://t.co/boP2HW2JQg
Thanks so much everyone for your love and concern, but by far the best part of today was being with the best people in the world: the everyday + extraordinary heroes & activists of @Renovate_CH . I want to share with you some pictures & ❤️🧡💛💚💙💜🤎 for them. A 🧵. 1/
When an IPCC climate scientist takes part in civil disobedience actions because governments aren't acting fast enough. If scientists of that caliber are getting involved, things really are heating up. And not just a little.
Come take part in EffiSciences' big hackathon on AI safety, this weekend, at 42 Paris @42born2code @42Network A researcher in the field will give a presentation on the theme 🤫 Hurry, there aren't many spots left! ⌛️ ⬇️⬇️⬇️ https://t.co/2nD0TeaYIA
The compute used to train AIs is doubling every 3.4 months. That's 11.5× each year. What will AIs look like in 10 years? We don't know. But if we don't solve *the alignment problem*, we won't be there to talk about it. Learn about what you can do: https://t.co/NNntAzhwDm 👈
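A quick sanity check of the arithmetic (my own back-of-the-envelope, not part of the original post):

```python
# Doubling every 3.4 months compounds to 2**(12/3.4) per year.
months_per_doubling = 3.4
per_year = 2 ** (12 / months_per_doubling)    # ≈ 11.5
per_decade = 2 ** (120 / months_per_doubling) # ≈ 4e10, were the trend to hold
print(f"{per_year:.1f}x per year, {per_decade:.1e}x per decade")
```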
Today is a special day! It's day 1 of ML4Good, the first camp in France aiming to train researchers specifically on the technical aspects of AI safety, so that it benefits society.👇