Cassidy Nelson
@cassidyknelson
Followers 542 · Following 239 · Media 0 · Statuses 62
Director of Biosecurity Policy at the Centre for Long-Term Resilience
London, England
Joined January 2013
Read our full piece here:
time.com
The AI security ecosystem is overly focused on preventing pandemic-level attacks, creating a dangerous blind spot.
7/ This isn't a call to look away from extreme biological risks. It's a call for a posture that can actually detect and prevent them.
6/ And we need public-private partnerships that merge classified intelligence with proprietary AI data. Cross-cleared personnel from companies and government need a shared space to spot threats and capability horizons that neither side can see alone.
5/ We need distinct safety tests and reporting across the CBRNe spectrum, not only bio ones.
4/ Someone exploring AI for a chemical attack today may be building capability for something far worse tomorrow. If our safety architecture only triggers for the apocalypse, we'll miss the signals leading up to it.
3/ State actors. Terrorist organisations. Skilled, well-resourced individuals. Chemical and explosive threats. These aren't distractions from catastrophic risk preparedness – they're the early warning system for it.
2/ The current approach is overly focused on a single scenario: a lone novice using AI to engineer a pandemic pathogen. This matters, but it's not the only threat – and it's not even the best way to prevent engineered pandemics.
1/ The AI safety ecosystem has mobilised around AI-enabled biological attacks. That's the right instinct. But we've converged on too narrow a model of what that threat looks like – and it's creating dangerous blind spots. New piece in TIME with @rebeccahersman
"While it is good that companies are focusing on pandemics, the ecosystem is overly focused on a single 'lone wolf virus terrorist' model as the most serious threat. Significantly less attention is being paid to all other risk scenarios." Rebecca Hersman and Cassidy Nelson: The
time.com
The AI security ecosystem is overly focused on preventing pandemic-level attacks, creating a dangerous blind spot.
"The biosecurity landscape is evolving rapidly. The BWC risks falling even further behind. A single lapse in vigilance could spark consequences that reverberate across continents and generations."
A rare speech at the BWC capturing the frustration of these efforts, which have reached a precipice. "If we take this path, mechanisms will one day – at best – be born old. This delay is not harmless. It comes at a cost, a huge cost." - Ambassador Frederico S Duque Estrada Meyer (Brazil)
A new @ScienceMagazine article from 30+ leading international scientists, including several @JCVenterInst researchers, examines the potential dangers of building "mirror life" – organisms composed entirely of mirror-image biological molecules.
Foundations: Why Britain Has Stagnated. A new essay by @bswud, @SCP_Hughes & me. Why the UK's ban on investment in housing, infrastructure and energy is not just a problem. It is *the* problem. And how fixing it is the defining task of our generation. https://t.co/N6McRZCOlx
ukfoundations.co
Why Britain has stagnated
*New CLTR and @RANDEurope Collaboration* We are excited to announce that we are working with RAND Europe on developing a comprehensive risk index for AI-enabled biological tools to assist policymakers in assessing evolving biosecurity threats. https://t.co/BNcLUjqzjH
rand.org
The Centre for Long-Term Resilience (CLTR) and RAND Europe developed the first flexible framework designed to assess artificial intelligence-enhanced tools based on their capabilities, potential for...