Future of Life Institute

@FLI_org

Followers: 88K · Following: 4K · Media: 2K · Statuses: 7K

We work on reducing extreme risks and steering transformative technologies to benefit humanity. RT β‰  endorsement. Bluesky: https://t.co/IjvxJtEEeQ

Campbell, CA
Joined June 2014
@FLI_org
Future of Life Institute
15 days
Why talk to your kid when a bot can? This Christmas, sit back and let AI raise them:
26
410
2K
@FLI_org
Future of Life Institute
15 hours
Applications are due today! β²οΈπŸ‘‡ https://t.co/1AqXeRFRiD
futureoflife.org
The Vitalik Buterin Postdoctoral Fellowship in AI Existential Safety is designed to support promising researchers for postdoctoral appointments who plan to work on AI existential safety research.
@FLI_org
Future of Life Institute
6 days
βŒ›πŸ« Less than one week left to apply for our technical postdoctoral fellowships in AI existential safety! βŒ› Fellows receive: πŸ’° An annual $80,000 stipend at universities in the US, UK and Canada. ✈️ A $10,000 fund that can be used for research-related expenses such as travel
0
1
3
@FLI_org
Future of Life Institute
4 days
πŸ’₯ As we jump into the new year, check out our final newsletter of 2025, covering: πŸ—‚οΈ Winter 2025 AI Safety Index πŸ›οΈ NY's new RAISE Act πŸ“œ White House preemption Executive Order 🎨 Results from our Keep the Future Human creative contest And more. πŸ”— Read it now in the replies:
2
3
6
@FLI_org
Future of Life Institute
7 days
"It's completely ridiculous that we should let tech companies talk openly about building a new replacement species and not have to first demonstrate to some regulators that this is safe." -FLI's @Tegmark talking to @CNN at @WebSummit:
3
14
48
@FLI_org
Future of Life Institute
8 days
@NPCollapse Watch Connor Leahy's (@NPCollapse) full interview with FLI Podcast host @GusDocker:
0
0
3
@FLI_org
Future of Life Institute
8 days
"The moment you have a system - one system - as smart as a human, you can instantly scale it up to 1,000, 1,000,000 X, and you immediately will have something that is vastly smarter than humanity and that can improve itself, get more power, develop new technology." -@NPCollapse:
2
6
20
@FLI_org
Future of Life Institute
13 days
@UofT @DavidDuvenaud @GusDocker Listen in full here or on your favourite podcast player:
0
2
18
@FLI_org
Future of Life Institute
13 days
"Let me say it loud and clear here that I think that in the post-AGI world it is gonna be extremely alien and so different that if we could avoid crossing that threshold, I think we should." -@UofT machine learning professor @DavidDuvenaud on the FLI Podcast with @GusDocker β¬‡οΈπŸ”—
3
3
19
@FLI_org
Future of Life Institute
14 days
@NPCollapse The full conversation between @NPCollapse and @GusDocker:
0
1
6
@FLI_org
Future of Life Institute
14 days
@NPCollapse More highlights:
0
1
3
@FLI_org
Future of Life Institute
14 days
"People just think [superintelligence] is just gonna be like ChatGPT but a bit cleverer, and this is just not what we're talking about." From @NPCollapse's FLI Podcast episode on why humanity risks extinction from AGI πŸ“ΊπŸ‘‡πŸ”—
7
8
33
@ednewtonrex
Ed Newton-Rex
14 days
This is a great ad. Keep AI chatbots away from children.
@FLI_org
Future of Life Institute
15 days
Why talk to your kid when a bot can? This Christmas, sit back and let AI raise them:
23
3K
19K
@AISafetyMemes
AI Notkilleveryoneism Memes ⏸️
18 days
IMPORTANT POINT: Even ***if*** we solve alignment and build "obedient superintelligences", we could still get a super dangerous world where everybody's "obedient slave superheroes" are fighting. We may be ants trampled by elephants fighting. And if they AREN'T obedient...
@FLI_org
Future of Life Institute
21 days
πŸ€– ⚠️ "If we have obedient superintelligences that just do what people tell them to do, that is probably a super dangerous world also." -@AnthonyNAguirre on the FLI Podcast with host @GusDocker. πŸ“Ί Watch more in the replies:
28
7
122
@FLI_org
Future of Life Institute
19 days
5. The Button by Vaibhav Jain. What if the people building AGI don't want to build it? A short story told from the perspective of an AI alignment researcher at a fictional leading AI lab. She's part of the race toward artificial general intelligenceβ€”and she is terrified of
2
1
4
@FLI_org
Future of Life Institute
19 days
4. The Choice Before Us by Nick Shapiro. The Choice Before Us is an interactive narrative game where players run an AI startup and confront the same escalating pressures described in Keep The Future Human. As they unlock extraordinary breakthroughs for humanity, rising autonomy,
1
0
1
@FLI_org
Future of Life Institute
19 days
3. Will AI Destroy Humanity? by Vin Sixsmith and Renzo Stadhouder. A 3D animated walkthrough exploring the dangerous AGI race and how we can choose a safer path. Features visual storytelling that makes complex AI safety concepts accessible, covering the four measures to prevent
1
0
2