AVERI (@AVERIorg)
Nonprofit working to make third-party auditing of frontier AI effective and universal.
Washington, DC · Joined April 2025
417 Followers · 6 Following · 0 Media · 10 Statuses
AVERI’s goal is to make third-party auditing of frontier AI effective and universal. Today, together with coauthors at dozens of organizations, we set out our vision in "Frontier AI Auditing" — a framework for rigorous third-party audits of safety and security practices at frontier AI developers.
We also lay out concrete next steps — from clarifying insurance coverage, to auditor accreditation, to R&D investments that make high-assurance auditing technically feasible. Read the paper and more about the AI Verification and Evaluation Research Institute (AVERI) here:
averi.org
A comprehensive framework for independent evaluation of frontier AI systems, mapping access requirements to systemic risks.
Done well, frontier AI auditing:
✅ Gives enterprises and governments confidence to adopt
✅ Lets responsible developers differentiate their products
✅ Helps insurers price risk accurately
✅ Enables confident deployment into high-stakes sectors
Frontier AI audits should cover at least four risk categories:
- Intentional misuse
- Unintended system behavior
- Information security
- Emergent social phenomena
Comprehensive coverage means fewer surprises — for developers, deployers, and users alike.
This isn't a one-size-fits-all approach. We propose "AI Assurance Levels" (AALs) — a framework for calibrating how much confidence stakeholders can place in audit findings. Different contexts need different levels: lighter-weight checks for some deployments, deeper assurance for high-stakes ones.
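To make the AAL idea concrete, here is a minimal, purely hypothetical Python sketch (not from the paper; the level names, contexts, and thresholds are illustrative assumptions): higher levels mean deeper audit scrutiny, and each deployment context sets a minimum bar an audit must clear.

```python
from enum import IntEnum

class AAL(IntEnum):
    """Hypothetical AI Assurance Levels: higher values mean deeper audit scrutiny."""
    SELF_ATTESTATION = 1      # developer's own claims, lightly checked
    DOCUMENT_REVIEW = 2       # auditor reviews developer-provided evidence
    INDEPENDENT_TESTING = 3   # auditor runs its own evaluations of the system
    DEEP_ACCESS_AUDIT = 4     # model internals, training process, staff interviews

# Hypothetical mapping from deployment context to the minimum assurance
# level a stakeholder might require before relying on audit findings.
REQUIRED_LEVEL = {
    "internal prototype": AAL.SELF_ATTESTATION,
    "consumer chatbot": AAL.INDEPENDENT_TESTING,
    "critical infrastructure": AAL.DEEP_ACCESS_AUDIT,
}

def audit_sufficient(context: str, achieved: AAL) -> bool:
    """Return True if the achieved assurance level meets the context's bar."""
    return achieved >= REQUIRED_LEVEL[context]

print(audit_sufficient("consumer chatbot", AAL.DOCUMENT_REVIEW))    # False
print(audit_sufficient("consumer chatbot", AAL.DEEP_ACCESS_AUDIT))  # True
```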
We drew lessons from financial auditing, aviation safety, penetration testing, consumer product certification, and more. Each shows what works: independence safeguards, defense in depth, adversarial testing, continuous monitoring. Each also shows what fails: conflicts of interest.
To do this meaningfully, auditors need deep access to non-public information: model internals, training processes, compute allocation, governance records, staff interviews. This is standard practice in other industries, usually built after something goes wrong. With AI, we have the chance to build it before something does.
Our vision: an ecosystem of independent auditors who can verify companies' safety claims, evaluate their systems against relevant standards, and assess organizational practices, not just individual models. Risk doesn't come from models alone; it emerges from the interaction of models, organizations, and deployment contexts.
Today's paper was written by 30+ researchers from numerous organizations, including authors with very different perspectives on AI policy.
AI is rapidly becoming critical societal infrastructure — but enterprises, governments, and insurers need reliable ways to verify that safety and security claims hold up, and that key risks are being mitigated. Right now, too much comes down to "vibes" and trust in company self-reporting.