Ryan Kidd
@ryan_kidd44
Followers 2K · Following 12K · Media 48 · Statuses 1K
Co-Executive Director @MATSprogram, Co-Founder @LondonSafeAI, Regrantor @Manifund | PhD in physics | Accelerate AI alignment + build a better future for all
Berkeley, CA
Joined March 2019
MATS 9.0 applications are open! Launch your career in AI alignment, governance, and security with our 12-week research program. MATS provides field-leading research mentorship, funding, Berkeley & London offices, housing, and talks/workshops with AI experts.
14 · 57 · 282
AI that is “forced to be good” vs. “genuinely good”: should we care about the difference? (Yes!) We’re releasing the first open implementation of character training. We shape the persona of AI assistants in a more robust way than alternatives like prompting or activation steering.
3 · 32 · 145
Trading is now live on DYLI. You can now trade collectibles, add cash, and chat directly with other users. 100% verified and secure. Real trades. Real ownership.
15 · 10 · 51
Our survey’s finding that AI researchers assign a median 5-10% probability to extinction or similarly bad outcomes made a splash (NYT, NBC News, TIME…). But people sometimes underestimate the survey’s methodological quality due to various circulating misconceptions. Today, an FAQ correcting key errors:
2 · 10 · 65
In 2025, only 4 security vulnerabilities with CVEs were disclosed in OpenSSL, the crypto library securing most of the internet. AISLE @WeAreAisle's autonomous AI system discovered 3 of the 4 and proposed the fixes that remediated them.
10 · 21 · 152
If you think you would make a great RM (or any other essential role), we are still hiring for roles starting in 2026! Come and help us 10x the AI safety & security field! https://t.co/qhzWaaeb0i
matsprogram.org
0 · 0 · 5
What do RMs do? Chiefly, they meet every 1-2 weeks with scholars and mentors and provide essential project coordination, people management, and program resources. Great RMs are experienced at a combination of people/program management, executive coaching, and AI safety research.
1 · 0 · 4
However, we receive proportionally fewer RM applications than for our other essential roles, like Community Manager and Operations Generalist. We expect to hire ~8% of our RM applicants, in contrast to ~3% for other essential roles. Note that we hire more RMs in absolute terms.
2 · 0 · 6
MATS prioritizes program quality over mere size. Our Research Managers increase program value by 19% on average for scholars and 20% for mentors. Each RM supports 8-12 scholars (ideally 8) and 1-4 mentors and ensures a high-quality program experience.
1 · 0 · 5
Marginal MATS program applicants and mentors are extremely strong. Our funders are excited to see us grow. However, fast scaling is hard and benefits from strong infrastructure and an experienced team. Our Research Managers are an essential component and comprise ~60% of staff.
1 · 0 · 6
What is the chief bottleneck in scaling the AI safety field? My hypothesis: research program/people managers. MATS applicants are increasing exponentially and we had 272 mentors apply for Summer 2026 (and expect to accept 20%). Great research managers are our chief constraint!
2 · 0 · 13
@sriramk @portalmerchant @deanwball Thanks for your interest. I'm not expecting too much danger in the next 18 months, so these would mostly be small updates, but to answer the question:
MORE WORRIED:
- Anything that looks like shorter timelines, especially superexponential progress on the METR time-horizons graph or
metr.org
6 · 6 · 206
🔔New paper: Can reasoning models hide their reasoning? We stress-tested Chain-of-Thought (CoT) monitoring and found that while monitors detect ~96% of hidden malicious intent under normal conditions, ⚠️detection can collapse to ~10% under strong obfuscation pressure. 🧵
9 · 3 · 15
We're hiring for several roles at @theoremlabs. Reach out if you're exploring!
- ML research scientists
- ML research engineers
- Compiler engineers
- Senior SWEs
1 · 4 · 29
We're hiring for Research Scientists / Engineers!
- We work closely with all frontier labs
- We're a small org and can move fast
- We choose our own agenda and what we publish
We're especially looking for people who enjoy fast empirical research. Deadline: 31 Oct!
17 · 70 · 723
Reminder: @METR_Evals is hiring! We have an ambitious research agenda ahead of us, are well-funded, and need senior researchers! We're extending our work on time-horizons, productivity/"uplift", and agent monitoring, and have great research access across industry/gov/academia!
4 · 21 · 140
Even if we figure out how to control superintelligent AI systems, the question "Who controls them?" remains. Our MATS scholar Alex Kastner has gamed out a worst-case scenario for power concentration, in which a single CEO becomes AGI dictator.
54 · 45 · 367
If you said: “We should have real-time incident reporting for large-scale frontier AI cyber incidents.” A lot of people in DC would say: “That sounds ea/doomer-coded.” And yet incident reporting for large-scale, non-AI cyber incidents is the standard practice of all major
11 · 39 · 317
Rolling applications will remain open, but today is the final day for a 2025 start!
0 · 0 · 2
Last day to apply to work at MATS! We are the largest AI safety & security fellowship and most prestigious technical AI safety talent pipeline. We need excellent researchers, managers, and generalists. Help us grow the field 10-100x! https://t.co/yHGs5cyQLG
matsprogram.org
MATS is hiring world-class researchers, managers, generalists, and more to help grow our AI safety & security talent pipeline! Apply by Oct 17 for a Dec 1 start.
2 · 2 · 14