Chris Meserole
@chrismeserole
3K Followers · 2K Following · 27 Media · 1K Statuses
Executive Director, Frontier Model Forum | Former Director, Brookings A.I. & Emerging Tech Initiative
Washington, DC
Joined November 2007
Information-sharing is central to the mission and purpose of @fmf_org. Today I’m excited to announce that FMF member firms have signed a first-of-its-kind agreement on information-sharing about threats, vulnerabilities, and unique capabilities:
We're pleased to announce that all of FMF's member firms have signed a first-of-its-kind agreement to facilitate information-sharing about threats, vulnerabilities, and capability advances unique to frontier AI:
🧵 NEW ISSUE BRIEF (1/5) As frontier AI capabilities advance, it's crucial to develop robust risk management practices for AI and biology. Our latest issue brief outlines the current landscape of AIxBIO safeguards, highlighting over 20 mitigations and related best practices.
Frontier AI frameworks are an important risk management tool, but they are more appropriate for some risks than others. Our latest technical report examines: 👉 The rationale for focusing on specific risks 👉 How developers define thresholds https://t.co/JZPCLg3zez
frontiermodelforum.org
Implementing Frontier AI Frameworks (report series)
We just announced that we piloted a new type of external review arrangement with Amazon. Here's why I think this arrangement is a step in the right direction:
METR worked with @amazon to pilot a new type of external review in which Amazon shared evidence beyond what can be collected via API, including information about training and internal evaluation results with transcripts, to inform our assessment of its AI R&D capabilities.
Frontier AI safety frameworks have emerged as a critical tool for managing potential risks to public safety and security. In a series of technical reports over the coming months, the Frontier Model Forum will examine how these frameworks can be implemented effectively.
The AI Safety Fund is making grants to support research on: - AI and Biosecurity - AI and Cybersecurity - AI Agent Evaluations The first deadline is January 20th. Apply here:
aisfund.org
We’re hiring! The Frontier Model Forum is expanding our team. We’re considering candidates for four key positions in AI safety and security: 🔹 Head of AI Safety 🔹 Head of AI Security 🔹 AI Safety Manager 🔹 AI Security Manager Learn more at:
frontiermodelforum.org
🛡️ New issue brief: Leveraging #FrontierAI for cybersecurity! From automated threat detection to intelligent incident response, we've outlined how advanced AI systems can be used to improve security. #cybersecurity #AI
frontiermodelforum.org
🚀 Excited to announce our issue brief on #frontierAI #SafetyFrameworks! Drawn from the Frontier AI Safety Commitments and published frameworks, the brief reflects a preliminary consensus among FMF member firms on the core components of safety frameworks: https://t.co/sMUQodBXgW
frontiermodelforum.org
We’ve published a new document, Common Elements of Frontier AI Safety Policies, that describes the emerging practice for AI developer policies that address the Seoul Frontier AI Safety Commitments.
The mission of the Frontier Model Forum is to advance frontier AI safety by identifying best practices, supporting scientific research, and facilitating greater information-sharing. We’re excited to share our early progress in our latest update:
frontiermodelforum.org
Excited to announce our first issue brief documenting best practices for #FrontierAI safety evaluations! Read more about our recommended best practices for designing and interpreting frontier AI safety evaluations #AISafety #Evaluations
https://t.co/LHW45fnIFx
frontiermodelforum.org
Excited to see the announcement today of the UK’s new Systemic AI Safety fund, which will be a great complement to our AI Safety Fund. Very much look forward to all the important research it will support!
We are announcing new grants for research into systemic AI safety. Initially backed by up to £8.5 million, this program will fund researchers to advance the science underpinning AI safety. Read more: https://t.co/QHOLUp3QGR
Welcome @Amazon and @Meta to the @fmf_org! They join founding members @AnthropicAI, @Google, @Microsoft, and @OpenAI in advancing frontier AI safety – from best practice workshops to policymaker education and collaborative research. More here:
frontiermodelforum.org
Congrats to the folks at GDM, this is an important step forward!
🔭 Very happy to share @GoogleDeepMind's exploratory framework to ensure future powerful capabilities from frontier models are detected and mitigated. We're starting with an initial focus on Autonomy, Biosecurity, Cybersecurity, and Machine Learning R&D. 🚀 https://t.co/Foi0aBwTzT
⚖️ Measuring training compute appropriately is essential for ensuring that AI safety measures are applied in an effective and proportionate way. See here for a new brief on how we're approaching the issue:
frontiermodelforum.org
🚨 The Frontier Model Forum (@fmf_org) is hiring! They're looking for a *Research Science Lead* and *Research Associates*.
frontiermodelforum.org
Great to see the announcement made today by @NIST to establish the USAISI’s new consortium. The @FMF_org is proud to be a founding member - we're excited to take part in the consortium and look forward to contributing to the shared goal of advancing AI safety.
We’re thrilled to participate in the US’s AI Safety Institute Consortium assembled by @NIST. Ongoing collaboration between government, civil society, and industry is critical to ensure that AI systems are as safe as they are beneficial.
The nerd in me has never felt so seen. Thanks @politico and @markscott88 for such a cool honor!
Honored that our Executive Director @chrismeserole was named @Politico's Wonk of the Week 🤓 thanks @markscott82
The more AI advances, the more we’ll need new efforts at the intersection of philosophy, ethics, and technology. Congrats to @mbrendan1 for his work on @cosmos_inst. Look forward to following along ⤵️
1/ Introducing: The AI Philosophy Landscape Full analysis in my bio, including a sneak preview of Cosmos Institute @cosmos_inst, the philanthropic effort I've been building over the past few months Thread ⬇️