Olaf Thielke ⏹️
@OlafCodeCoach
85 Followers · 4K Following · 40 Media · 835 Statuses
Code Coach, Aspirant Stoic, Freethinker. Loves to add value.
Auckland, New Zealand
Joined May 2020
No coordination on titles, yet they line up perfectly: “AI: Unexplainable, Unpredictable, Uncontrollable. If Anyone Builds It, Everyone Dies.”
@niccruzpatane 'If I could, I would certainly slow down AI and robotics, but I can’t' -- says the world's richest, most influential, most respected, and most capable man, @elonmusk . Really? IMHO, there are literally hundreds of effective things he could do to slow down dangerous AI
Nate Soares, co-author of the new book “If Anyone Builds It, Everyone Dies,” speaks with George Stephanopoulos about the potential dangers of artificial superintelligence.
Yes. We have to worry about AI and robotics. Some questions:
A great explanation of how close we might be to recursive self improvement (intelligence explosion) and why it matters. This is an explicit goal of AI companies, and could quickly lead to a pace of development that is near impossible to manage. People need to act now by
In Anthropic’s system card for their newest AI, Claude Opus 4.5, they say that confidently ruling out that their AI R&D-4 dangerous capability threshold has been crossed is “becoming increasingly difficult”. A thread on what this capability would mean 🧵
As a father, I'm so thankful to Congress for keeping AI less regulated than turkeys – Happy Thanksgiving!
🎥 Watch: MIRI CEO Malo Bourgon's opening statement before a Canadian House of Commons committee. @m_bourgon argues that superintelligence poses a risk of human extinction, but that this is not inevitable. We can kickstart a conversation that makes it possible to avert this.
"If a bridge downtown had a 25% chance of collapsing, we wouldn't be like, 'well think of the benefits of having the bridge open'. We'd be like, 'Shut it down. Build a better bridge.'" Highlights from "If Anyone Builds It, Everyone Dies" co-author @so8res on the FLI Podcast:
In 2013, @ESYudkowsky predicted that interpretability would be difficult enough that the internal workings of AI systems would not be understood by the companies building them. That may sound obvious today, but at the time, even @DarioAmodei disagreed.
The race to Superintelligent AI, which many eminent AI scientists say would murder all humans.
If Trump bails out AI, he will be subsidizing—with taxpayer money—the development of a technology that aims to someday take away your job. And your children’s jobs. Are you happy about that?
@GaryMarcus Footing the bill to kill jobs.
@kevinnbass Have you missed the last 20+ years of AI safety experts warning against the catastrophic risks of building agents smarter than humans? Or the fact that every single CEO of a frontier AI company has warned repeatedly that AI could drive us extinct? Are they all wrong?
Peter Thiel wants us to be terrified of this man and not of his Palantir project, which enables real-time government monitoring at scale.
It's OK to want your kids to do meaningful, challenging jobs that help other people survive & prosper -- rather than being passive, powerless, pleasure-seeking wards of an AI welfare state.
Why is a $100 million super PAC making me their first target? I’m a former computer engineer. I led the fight for a first-in-the-nation AI safety bill in NY. And now, I’m running for Congress to make sure AI works for you, not against you.
@viemccoy Same theory that predicts ASI killing everyone, successfully predicted (~decades ahead) things that AIs already did or tried to do: - escape confinement and lie about it - subvert safety protocols placed upon it - hack its reward - lie despite being told to not do so - acquire
The book everyone in the media is reading. “Absolutely compulsory,” says ABC Chair Kim Williams. I agree.
Nate Soares (@So8res) is someone worth listening to. He is a calm professional who explains things in approachable ways. The lack of control and understanding is the main problem with advanced models. We must stop developing them until our wisdom catches up.
AI researcher Nate Soares says that despite AIs getting more powerful, we're not on track to be able to control them. "This is sort of a worst case situation. I never wanted to be here."