Future of Life Institute
@FLI_org
88K Followers · 4K Following · 2K Media · 7K Statuses
We work on reducing extreme risks and steering transformative technologies to benefit humanity. RT ≠ endorsement. Bluesky: https://t.co/IjvxJtEEeQ
Campbell, CA
Joined June 2014
Why talk to your kid when a bot can? This Christmas, sit back and let AI raise them:
Applications are due today! ⏲️👇 https://t.co/1AqXeRFRiD
futureoflife.org
The Vitalik Buterin Postdoctoral Fellowship in AI Existential Safety supports promising researchers who plan to pursue AI existential safety research during their postdoctoral appointments.
⏳ Less than one week left to apply for our technical postdoctoral fellowships in AI existential safety! Fellows receive: 💰 An annual $80,000 stipend at universities in the US, UK, and Canada. ✈️ A $10,000 fund that can be used for research-related expenses such as travel
🔥 As we jump into the new year, check out our final newsletter of 2025, covering: the Winter 2025 AI Safety Index, NY's new RAISE Act, the White House preemption Executive Order, results from our Keep the Future Human creative contest, and more. 👇 Read it now in the replies:
"It's completely ridiculous that we should let tech companies talk openly about building a new replacement species and not have to first demonstrate to some regulators that this is safe." -FLI's @Tegmark talking to @CNN at @WebSummit:
"The moment you have a system - one system - as smart as a human, you can instantly scale it up to 1,000, 1,000,000 X, and you immediately will have something that is vastly smarter than humanity and that can improve itself, get more power, develop new technology." -@NPCollapse:
"Let me say it loud and clear here that I think that in the post-AGI world it is gonna be extremely alien and so different that if we could avoid crossing that threshold, I think we should." -@UofT machine learning professor @DavidDuvenaud on the FLI Podcast with @GusDocker β¬οΈπ
"People just think [superintelligence] is just gonna be like ChatGPT but a bit cleverer, and this is just not what we're talking about." From @NPCollapse's FLI Podcast episode on why humanity risks extinction from AGI πΊππ
IMPORTANT POINT: Even ***if*** we solve alignment and build "obedient superintelligences," we could still get a super dangerous world where everybody's "obedient slave superheroes" are fighting. We may be ants trampled by elephants fighting. And if they AREN'T obedient...
🤔⚠️ "If we have obedient superintelligences that just do what people tell them to do, that is probably a super dangerous world also." -@AnthonyNAguirre on the FLI Podcast with host @GusDocker. 📺 Watch more in the replies:
Explore all of the winning projects here:
keepthefuturehuman.ai
$100,000+ in prizes for creative digital media that engages with the essay's key ideas, helps them to reach a wider range of people, and motivates action in the real world.
5. The Button by Vaibhav Jain. What if the people building AGI don't want to build it? A short story told from the perspective of an AI alignment researcher at a fictional leading AI lab. She's part of the race toward artificial general intelligence, and she is terrified of
4. The Choice Before Us by Nick Shapiro. An interactive narrative game where players run an AI startup and confront the same escalating pressures described in Keep The Future Human. As they unlock extraordinary breakthroughs for humanity, rising autonomy,
3. Will AI Destroy Humanity? by Vin Sixsmith and Renzo Stadhouder. A 3D animated walkthrough exploring the dangerous AGI race and how we can choose a safer path. Features visual storytelling that makes complex AI safety concepts accessible, covering the four measures to prevent