MATS Research

@MATSprogram

Followers: 2K · Following: 104 · Media: 1 · Statuses: 62

MATS empowers researchers to advance AI alignment, governance, and security

Berkeley, CA
Joined November 2023
@ryan_kidd44
Ryan Kidd
23 days
Last day to apply to work at MATS! We are the largest AI safety & security fellowship and the most prestigious technical AI safety talent pipeline. We need excellent researchers, managers, and generalists. Help us grow the field 10-100x! https://t.co/yHGs5cyQLG
matsprogram.org
@ryan_kidd44
Ryan Kidd
2 months
MATS is hiring world-class researchers, managers, generalists, and more to help grow our AI safety & security talent pipeline! Apply by Oct 17 for a Dec 1 start.
2
2
14
@hackaprompt
HackAPrompt
27 days
We partnered w/ @OpenAI, @AnthropicAI, & @GoogleDeepMind to show that the way we evaluate new models against Prompt Injection/Jailbreaks is BROKEN. We compared Humans on @HackAPrompt vs. Automated AI Red Teaming. Humans broke every defense/model we evaluated… 100% of the time 🧵
9
82
243
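The comparison above boils down to per-defense attack success rates for human vs. automated red teamers. The sketch below shows that tally in minimal form; the records, field names, and defense names are invented for illustration and are not HackAPrompt's actual data or tooling.

```python
# Hypothetical sketch: attack success rate (ASR) per defense, split by attacker
# type. All records below are made up; they are not HackAPrompt's data format.
from collections import defaultdict

records = [
    # (defense, attacker_type, attack_succeeded)
    ("prompt_guard_a", "human", True),
    ("prompt_guard_a", "automated", False),
    ("prompt_guard_b", "human", True),
    ("prompt_guard_b", "automated", True),
]

def attack_success_rate(records):
    """Return {(defense, attacker_type): fraction of successful attacks}."""
    totals, wins = defaultdict(int), defaultdict(int)
    for defense, attacker, success in records:
        key = (defense, attacker)
        totals[key] += 1
        wins[key] += int(success)
    return {k: wins[k] / totals[k] for k in totals}

for (defense, attacker), asr in sorted(attack_success_rate(records).items()):
    print(f"{defense:15s} {attacker:10s} ASR = {asr:.0%}")
```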
@MiTerekhov
Mikhail Terekhov
26 days
You are a scheming AI. You find yourself deployed under an AI control protocol, and you know the details. You remember the papers. The lab is using a trusted monitor. What do you do next? Our new paper argues that with modern protocols, prompt injections will easily let you win! (1/9) 🧵
2
15
64
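For readers unfamiliar with the setup the thread assumes: in a trusted-monitoring control protocol, a powerful-but-untrusted model's output is scored by a weaker trusted model, and high-suspicion outputs are escalated rather than used. The sketch below is only a toy illustration of that loop, not the paper's protocol; the threshold and both model functions are stand-ins, and the monitor call is exactly the surface a prompt injection would target.

```python
# Minimal sketch of a trusted-monitoring control protocol (illustrative only;
# the threshold and model functions are assumptions, not taken from the paper).

SUSPICION_THRESHOLD = 0.8  # above this, the output is audited instead of used

def untrusted_generate(task: str) -> str:
    """Placeholder for the powerful-but-untrusted model's answer."""
    return f"proposed solution for: {task}"

def trusted_monitor_score(task: str, answer: str) -> float:
    """Placeholder for a weaker trusted model rating suspiciousness in [0, 1].
    A prompt-injected answer would try to manipulate this very call."""
    return 0.95 if "ignore previous instructions" in answer.lower() else 0.1

def control_protocol(task: str) -> str:
    answer = untrusted_generate(task)
    suspicion = trusted_monitor_score(task, answer)
    if suspicion >= SUSPICION_THRESHOLD:
        return "[escalated to human audit]"
    return answer

print(control_protocol("write a data-migration script"))
```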
@MATSprogram
MATS Research
1 month
Amazing work, Bartosz!
@bartoszcyw
Bartosz Cywiński
1 month
Can we catch an AI hiding information from us? To find out, we trained LLMs to keep secrets: things they know but refuse to say. Then we tested black-box & white-box interp methods for uncovering them and many worked! We release our models so you can test your own techniques too!
0
0
7
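One of the white-box interpretability techniques the tweet alludes to can be as simple as a linear probe on hidden activations. The sketch below demonstrates that general technique on synthetic activations; it is not the paper's method, and the "secret direction" setup is invented purely so the probe has something to find.

```python
# Toy sketch of a white-box probe: train a linear classifier on hidden
# activations to detect when a model is withholding a known fact.
# The activations are synthetic stand-ins, not from the released models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_model, n = 64, 400

# Pretend a "secret" direction is added to activations on secret-keeping turns.
secret_direction = rng.normal(size=d_model)
labels = rng.integers(0, 2, size=n)  # 1 = model is hiding something
acts = rng.normal(size=(n, d_model)) + np.outer(labels, secret_direction)

probe = LogisticRegression(max_iter=1000).fit(acts[:300], labels[:300])
print("held-out probe accuracy:", probe.score(acts[300:], labels[300:]))
```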
@labenz
Nathan Labenz
2 months
7) MATS: They provide short, intensive training programs that help people transition their careers into mechanistic interpretability and other AI safety work. Check out the mentors listed in this thread – a true who's who of top safety researchers https://t.co/H1jXtW3me5
@ryan_kidd44
Ryan Kidd
2 months
MATS 9.0 applications are open! Launch your career in AI alignment, governance, and security with our 12-week research program. MATS provides field-leading research mentorship, funding, Berkeley & London offices, housing, and talks/workshops with AI experts.
1
2
25
@ryan_kidd44
Ryan Kidd
1 month
The AI safety & security research field is growing by 25% per year. At this rate, there will be 8.5k researchers when we reach AGI.
6
5
58
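The projection is straightforward compound growth, researchers(t) = baseline × 1.25^t. The tweet does not state the current headcount or the assumed years to AGI, so the numbers in the sketch below are placeholders you can vary to see how the figure moves.

```python
# Compound-growth projection behind the tweet's estimate. The baseline
# headcount and horizons below are placeholder assumptions, not stated values.
def projected_researchers(baseline: int, annual_growth: float, years: int) -> int:
    return round(baseline * (1 + annual_growth) ** years)

for years in (5, 8, 10):
    print(years, "years:",
          projected_researchers(baseline=1_000, annual_growth=0.25, years=years))
```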
@ryan_kidd44
Ryan Kidd
1 month
Last day to apply to MATS Winter 2026 Program! Launch your AI safety/security career with the leading fellowship program
matsprogram.org
@ryan_kidd44
Ryan Kidd
2 months
MATS 9.0 applications are open! Launch your career in AI alignment, governance, and security with our 12-week research program. MATS provides field-leading research mentorship, funding, Berkeley & London offices, housing, and talks/workshops with AI experts.
2
7
41
@ryan_kidd44
Ryan Kidd
2 months
MATS is hiring world-class researchers, managers, generalists, and more to help grow our AI safety & security talent pipeline! Apply by Oct 17 for a Dec 1 start.
4
9
48
@ryan_kidd44
Ryan Kidd
2 months
MATS Summer 2025 produced some excellent research in AI alignment, transparency, and security!
1
2
31
@CameronHolmes92
Cameron Holmes ✈️ Berkeley
2 months
Incredible work by 3x @MATSprogram alumni and a great example of applied Mech Interp beating black box baselines and making significant progress on critical real-world problems:
@OBalcells
Oscar Balcells Obeso
2 months
Imagine if ChatGPT highlighted every word it wasn't sure about. We built a streaming hallucination detector that flags hallucinations in real-time.
2
3
25
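The detector itself is not described in this thread, so the snippet below is only a toy stand-in for the general idea: stream tokens and flag the ones the model generated with low confidence. The threshold and the simulated (token, probability) pairs are assumptions; a real system would use a learned detector rather than a raw probability cutoff.

```python
# Toy illustration of streaming, token-level flagging (not the authors' method):
# mark any token whose generation probability falls below an assumed threshold.
from typing import Iterable, Tuple

FLAG_BELOW = 0.35  # assumed confidence threshold, purely illustrative

def stream_with_flags(tokens_with_probs: Iterable[Tuple[str, float]]) -> str:
    out = []
    for token, prob in tokens_with_probs:
        out.append(f"[{token}?]" if prob < FLAG_BELOW else token)
    return " ".join(out)

# Simulated (token, model probability) pairs standing in for a live decode loop.
print(stream_with_flags([("The", 0.9), ("capital", 0.8), ("is", 0.7), ("Quito", 0.2)]))
```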
@ryan_kidd44
Ryan Kidd
2 months
Applications to mentor in MATS Summer 2026 are now open! Mentorship is a 12-week, ≥1 h/week commitment. Let's build the field of AI safety & security together! https://t.co/3Fs5veoIW8
1
5
25
@ryan_kidd44
Ryan Kidd
2 months
MATS 9.0 applications are open! Launch your career in AI alignment, governance, and security with our 12-week research program. MATS provides field-leading research mentorship, funding, Berkeley & London offices, housing, and talks/workshops with AI experts.
14
57
279
@ryan_kidd44
Ryan Kidd
6 months
Last chance to apply to work at MATS! Still taking applications for Research Managers, Community Managers, and Operations Generalists. Apply by May 2! https://t.co/rDq77CtO9p
matsprogram.org
1
4
19
@ryan_kidd44
Ryan Kidd
8 months
@MATSprogram Summer 2025 applications close Apr 18! Come help advance the fields of AI alignment, security, and governance with mentors including @NeelNanda5 @EthanJPerez @OwainEvans_UK @EvanHub @bshlgrs @dawnsongtweets @DavidSKrueger @RichardMCNgo and more!
3
25
138
@MATSprogram
MATS Research
1 year
MATS is hiring! Join us in advancing AI safety. Apply by Nov 3, 2024.
- Research Manager (Berkeley) (1-3 hires)
- Community Manager (Berkeley) (1 hire)
- Operations Generalist (Berkeley) (1-2 hires)
https://t.co/L1xlWRpOPM
matsprogram.org
0
0
5
@NeelNanda5
Neel Nanda
1 year
Prof David Krueger is mentoring alignment researchers via MATS, deadline Oct 13th. David is great, maybe you should apply!
@DavidSKrueger
David Krueger
1 year
Update: it’s live now! https://t.co/1ihsfTgMHP Big thanks to the team for being responsive and accommodating!!
0
4
64
@ryan_kidd44
Ryan Kidd
1 year
@MATSprogram Alumni Impact Analysis published! 78% of alumni are still working on AI alignment/control and 7% are working on AI capabilities. 68% have published alignment research. https://t.co/8akH9fEtEI
2
2
19
@ryan_kidd44
Ryan Kidd
1 year
@MATSprogram is holding application office hours on Fri Sep 27 at 11 am and 6 pm PT. We will discuss how to apply to MATS (due Oct 6!) and answer your Qs. Register here:
1
3
8