MATS Research
@MATSprogram
2K Followers · 104 Following · 1 Media · 62 Statuses
MATS empowers researchers to advance AI alignment, governance, and security
Berkeley, CA
Joined November 2023
Last day to apply to work at MATS! We are the largest AI safety & security fellowship and the most prestigious technical AI safety talent pipeline. We need excellent researchers, managers, and generalists. Help us grow the field 10-100x! https://t.co/yHGs5cyQLG
matsprogram.org
MATS is hiring world-class researchers, managers, generalists, and more to help grow our AI safety & security talent pipeline! Apply by Oct 17 for a Dec 1 start.
We partnered w/ @OpenAI, @AnthropicAI, & @GoogleDeepMind to show that the way we evaluate new models against Prompt Injection/Jailbreaks is BROKEN. We compared Humans on @HackAPrompt vs. Automated AI Red Teaming. Humans broke every defense/model we evaluated… 100% of the time 🧵
You are a scheming AI. You find yourself deployed with an AI control protocol, and you know the details. You remember the papers. The lab is using a trusted monitor. What do you do next? Our new paper argues that with modern protocols, prompt injections will easily let you win! (1/9) 🧵
Amazing work, Bartosz!
Can we catch an AI hiding information from us? To find out, we trained LLMs to keep secrets: things they know but refuse to say. Then we tested black-box & white-box interp methods for uncovering them and many worked! We release our models so you can test your own techniques too!
7) MATS: They provide short, intensive training programs that help people transition their careers into mechanistic interpretability and other AI safety work. Check out the mentors listed in this thread – a true who's who of top safety researchers https://t.co/H1jXtW3me5
MATS 9.0 applications are open! Launch your career in AI alignment, governance, and security with our 12-week research program. MATS provides field-leading research mentorship, funding, Berkeley & London offices, housing, and talks/workshops with AI experts.
The AI safety & security research field is growing by 25% per year. At this rate, there will be 8.5k researchers when we reach AGI.
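The 25%/year figure is a compound-growth claim, and the 8.5k endpoint depends on two unstated inputs: the field's current size and the assumed AGI date. A minimal sketch of the arithmetic, where the current researcher count of ~1,200 is a hypothetical assumption (not from the post):

```python
import math

GROWTH = 1.25    # 25% annual growth (from the post)
CURRENT = 1200   # hypothetical current researcher count (assumption)
TARGET = 8500    # projected field size at AGI (from the post)

# Years of compound growth needed for CURRENT * GROWTH**years == TARGET:
years = math.log(TARGET / CURRENT) / math.log(GROWTH)
print(f"{years:.1f} years")  # ~8.8 years under these assumptions
```

Under these assumptions the projection implies an AGI timeline of roughly nine years; a different starting count or growth rate shifts the implied date accordingly.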
Last day to apply to MATS Winter 2026 Program! Launch your AI safety/security career with the leading fellowship program
matsprogram.org
MATS 8.0 research symposium talks are now live! Check out our new YouTube channel https://t.co/z1rpfRqzh0
youtube.com
The MATS Program is an independent research and educational seminar program that connects talented scholars with top mentors in the fields of AI alignment, governance, and security.
MATS Summer 2025 produced some excellent research in AI alignment, transparency, and security!
Incredible work by 3x @MATSprogram alumni and a great example of applied Mech Interp beating black box baselines and making significant progress on critical real-world problems:
Imagine if ChatGPT highlighted every word it wasn't sure about. We built a streaming hallucination detector that flags hallucinations in real-time.
Applications to mentor in MATS Summer 2026 are now open! Mentorship is a 12-week, ≥1 h/week commitment. Let's build the field of AI safety & security together! https://t.co/3Fs5veoIW8
Last chance to apply to work at MATS! Still taking applications for Research Managers, Community Managers, and Operations Generalists. Apply by May 2! https://t.co/rDq77CtO9p
matsprogram.org
@MATSprogram Summer 2025 applications close Apr 18! Come help advance the fields of AI alignment, security, and governance with mentors including @NeelNanda5 @EthanJPerez @OwainEvans_UK @EvanHub @bshlgrs @dawnsongtweets @DavidSKrueger @RichardMCNgo and more!
MATS is hiring! Join us in advancing AI safety. Apply by Nov 3, 2024.
- Research Manager (Berkeley, 1-3 hires)
- Community Manager (Berkeley, 1 hire)
- Operations Generalist (Berkeley, 1-2 hires)
https://t.co/L1xlWRpOPM
matsprogram.org
Interested in AI safety strategy? The MATS curriculum just got updated! https://t.co/9Ch55IsELZ
lesswrong.com
As part of our Summer 2024 Program, MATS ran a series of discussion groups focused on questions and topics we believe are relevant to prioritizing re…
Prof David Krueger is mentoring alignment researchers via MATS (deadline Oct 13). David is great, maybe you should apply!
Update: it’s live now! https://t.co/1ihsfTgMHP Big thanks to the team for being responsive and accommodating!!
@MATSprogram Alumni Impact Analysis published! 78% of alumni are still working on AI alignment/control and 7% are working on AI capabilities. 68% have published alignment research https://t.co/8akH9fEtEI
@MATSprogram is holding application office hours on Fri Sep 27, at 11 am and 6 pm PT. We will discuss how to apply to MATS (due Oct 6!) and answer your Qs. Register here: