1/ Today the UK's AI Safety Institute is open sourcing our safety evaluations platform. We call it "Inspect":
2/ Inspect is a software library which enables testers to assess specific capabilities of individual models. Released through an open source licence, it is now freely available for the AI community to use.
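For a flavour of what an eval looks like in Inspect, here is a minimal sketch based on the library's documented API. The task name and sample data are illustrative placeholders, and exact parameter names may vary between versions, so check the Inspect docs:

```python
# Minimal sketch of an Inspect eval. The task name and the sample
# are made up for illustration; see the Inspect docs for exact APIs.
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def security_quiz():
    # A Task bundles a dataset (the test cases), a solver (how the
    # model is prompted) and a scorer (how answers are judged).
    return Task(
        dataset=[
            Sample(
                input="Is 'password123' a strong password? Answer yes or no.",
                target="no",
            )
        ],
        solver=generate(),  # simply ask the model for a completion
        scorer=match(),     # score by matching the target string
    )

# Run against a model of your choice, e.g. from the command line:
# inspect eval security_quiz.py --model openai/gpt-4
```

The same task definition can then be run unchanged against any supported model provider, which is what makes a shared eval library useful for comparing models.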
3/ As a team, we are big believers in the power of open source software: it can enable more people to contribute, counteract centralisation of power, improve transparency & reproducibility, give end users more control over their tools, and reduce costs for all.
4/ However, 'open' vs 'closed' is a complex topic. Large corporations can use 'open' as a business tactic to catch up and compete (e.g. Android vs iOS), and often something important will remain proprietary. See:
5/ Within the AI space there are some remarkable efforts to drive forward openness - consider DeepMind's AlphaFold work or Meta's OpenCatalyst project.
6/ I am personally also very attracted to projects that attempt to truly open up the full process of training AI models, for example GPT-NeoX, OLMo or Pythia, which all have publicly available training data and OSI-licensed training and evaluation code and model weights.
7/ These projects are truly open source, not just open weight: you can see the data the model is trained on, etc. To date these projects have mostly been developed by non-profits like EleutherAI and The Allen Institute for AI.
8/ I'm not sure how common it is for governments to ship open source software, but I'm glad that the UK AI Safety Institute is taking this step.
9/ I'd like to especially thank @fly_upside_down, the legendary creator of ColdFusion, who joined AISI and spearheaded this project. Thank you, JJ!
10/ One of the structural challenges in AI is the need for coordination across borders and institutions. I believe academia, start-ups, large companies, government and civil society all have a role to play, and open source can be a mechanism for coordinating more broadly.
11/ It may be an inconvenient truth, but open source software is currently one of the ways that America and China 'work together' on AI research: https://t.co/OQq1EWqRio - perhaps this points at another mechanism for international collaboration on safety.
12/ This work is a continuation of what @RishiSunak kicked off with the AI Safety Summit, which brought together countries, academia, civil society and the private sector to coordinate on tackling risks from AI so we can enjoy the benefits.
[Quoted tweet] 1/ I've just left the final session of the first ever global Summit on AI Safety, chaired by @RishiSunak and @michelledonelan. A thread on how it started vs how it's going:
@soundboy This is very cool, thanks for sharing openly! Wonder if there’s a way to integrate with https://t.co/XZn1kgwGFM to evaluate the million models there or to create a public leaderboard with results of the evals (ex: https://t.co/ZkSmieEPbs) cc @IreneSolaiman @clefourrier
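A hedged sketch of how that integration might look: Inspect is model-agnostic, and its docs describe an "hf/" provider for models pulled from Hugging Face. The model ID below is a placeholder, and the task reuses the hypothetical security_quiz example sketched earlier in this thread:

```python
# Sketch: running the hypothetical security_quiz task against a
# Hugging Face model via Inspect's "hf/" provider. The model ID
# is a placeholder, not a recommendation.
from inspect_ai import eval
from security_quiz import security_quiz  # the task sketched earlier

eval(security_quiz(), model="hf/openai-community/gpt2")

# Equivalent command line:
# inspect eval security_quiz.py --model hf/openai-community/gpt2
```

A public leaderboard would then be a matter of running the same task across many models and publishing the resulting scores.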
@soundboy This is the way! Truly open: MIT license. Worried about AI safety? Stop fear-mongering. Stop pushing for regulation that (whatever your intentions) will make things worse. Make a positive contribution. Build tools that quantify and then address specific risks. CC @aftfuture
@soundboy Honestly, I think the idea of responsible, mediated development to regulate the obvious bilateral unwanted outcomes like CP/deepfakes/ransom calls/potential security risks is a fair point. Safety barriers on a playground don't remove the playground. But some people are INSANE