
Max Winga
@maxwinga
Followers: 2K · Following: 4K · Media: 42 · Statuses: 945
Creator outreach and public education about AI extinction risk @ai_ctrl. Previously: AI safety researcher @ConjectureAI; UIUC Physics 2024. DMs open!
London, EN
Joined May 2024
Today I went on the Peter McCormack podcast to discuss AI extinction risk and ControlAI's effort to prevent AI companies from risking our lives by recklessly pursuing superintelligence. Go check it out!
NEW EPISODE DROPPED. AI safety advocate @MaxWinga joins me to break down the reckless billionaire AI race, the path to superintelligence, and why humanity may only have 5 years left. Watch – Links to full episode in 🧵
2 · 10 · 73
RT @GarrisonLovely: Seems like more people should be talking about how the richest companies in the world are explicitly trying to build re….
0 · 26 · 0
Already on Episode #3 of the ControlAI podcast, looking forward to many more! If you're interested in coming on to talk with us frankly about AI extinction risk, let me know; we're always on the lookout for new guests!
In this latest episode of our podcast, @maxwinga sits down with @DrWakuAI to explore AI security challenges! They discuss the risks of jailbreaking, how modern AI training methods create inherent vulnerabilities, and the growing threat posed by bad actors.
3 · 3 · 23
RT @mccormack_show: Forget sci-fi. The current generation of AI models can already help people build biological weapons more effectively th….
0 · 5 · 0
RT @mccormack_show: Meta has a “Superintelligence Team.” Sam Altman admits OpenAI’s goal is AGI. Big Tech is not bluffing. And the worst pa….
0 · 5 · 0
RT @mccormack_show: From ChatGPT to human extinction: @maxwinga reveals how superintelligent AI could use cyberattacks, drone swarms, or ev….
0 · 5 · 0
RT @NPCollapse: I totally agree with this observation, but think it's even worse than this. It's not just that humanism is lacking in AI,….
0 · 12 · 0
RT @ai_ctrl: AI 2027 lead author @DKokotajlo says companies shouldn't be allowed to build superhuman AI systems until they figure out how t….
0 · 16 · 0
This video does an amazing job visualizing AI 2027 and demonstrating why so many of us are working so hard to prevent the development of superintelligence before it's too late.
The AI 2027 scenario is terrifying and important. More people should be thinking about how radical change might come over the next few years, how likely it is, and how a sane world would be reacting to it. We want to bring you into the story, and the conversation. Video here:
0 · 3 · 39
RT @ai_ctrl: Thanks to @Siliconvos recent video made in partnership with us, over 2,000 citizens have used our tools to contact their repre….
0 · 7 · 0
Finally some fantastic news!
The moratorium just got taken out of the budget bill in a LANDSLIDE vote. 99 to 1. Incredible. Thank you to the lawmakers, the children's advocates, the artists and creators, the voters, the labor groups, and everyone who spoke out against the harmful AI law moratorium.
2 · 1 · 54