Ryan Kidd

@ryan_kidd44

Followers: 2K · Following: 12K · Media: 48 · Statuses: 1K

Co-Executive Director @MATSprogram, Co-Founder @LondonSafeAI, Regrantor @Manifund | PhD in physics | Accelerate AI alignment + build a better future for all

Berkeley, CA
Joined March 2019
@ryan_kidd44
Ryan Kidd
2 months
MATS 9.0 applications are open! Launch your career in AI alignment, governance, and security with our 12-week research program. MATS provides field-leading research mentorship, funding, Berkeley & London offices, housing, and talks/workshops with AI experts.
14
57
282
@_maiush
Sharan
2 days
AI that is “forced to be good” v “genuinely good” Should we care about the difference? (yes!) We’re releasing the first open implementation of character training. We shape the persona of AI assistants in a more robust way than alternatives like prompting or activation steering.
3
32
145
@AIImpacts
AI Impacts
6 days
Our survey’s finding that AI researchers assign a median 5-10% probability to extinction or similar made a splash (NYT, NBC News, TIME..) But people sometimes underestimate our survey’s methodological quality due to various circulating misconceptions. Today, an FAQ correcting key errors:
2
10
65
@stanislavfort
Stanislav Fort
8 days
In 2025, only 4 security vulnerabilities with CVEs were disclosed in OpenSSL = the crypto library securing most of the internet. AISLE @WeAreAisle's autonomous AI system discovered 3 out of the 4. And proposed the fixes that remediated them.
10
21
152
@yonashav
Yo Shavit
10 days
@sebkrier and I are pretty floored by the quality of MATS applicants
9
10
342
@ryan_kidd44
Ryan Kidd
10 days
If you think you would make a great RM (or any other essential role), we are still hiring for roles starting in 2026! Come and help us 10x the AI safety & security field! https://t.co/qhzWaaeb0i
matsprogram.org
0
0
5
@ryan_kidd44
Ryan Kidd
10 days
What do RMs do? Chiefly, they meet every 1-2 weeks with scholars and mentors and provide essential project coordination, people management, and program resources. Great RMs are experienced at a combination of people/program management, executive coaching, and AI safety research.
1
0
4
@ryan_kidd44
Ryan Kidd
10 days
However, we receive proportionally fewer RM applications than for our other essential roles, like Community Managers and Operations Generalists. We expect to hire ~8% of our RM applicants, in contrast to ~3% for other essential roles. Note that we hire more RMs in absolute terms.
2
0
6
@ryan_kidd44
Ryan Kidd
10 days
MATS prioritizes program quality over mere size. Our Research Managers increase program value by 19% on average for scholars and 20% for mentors. Each RM supports 8-12 scholars (ideally 8) and 1-4 mentors and ensures a high-quality program experience.
1
0
5
@ryan_kidd44
Ryan Kidd
10 days
Marginal MATS program applicants and mentors are extremely strong. Our funders are excited to see us grow. However, fast scaling is hard and benefits from strong infrastructure and an experienced team. Our Research Managers are an essential component and comprise ~60% of staff.
1
0
6
@ryan_kidd44
Ryan Kidd
10 days
What is the chief bottleneck in scaling the AI safety field? My hypothesis: research program/people managers. MATS applicants are increasing exponentially and we had 272 mentors apply for Summer 2026 (and expect to accept 20%). Great research managers are our chief constraint!
2
0
13
@slatestarcodex
Scott Alexander
10 days
@sriramk @portalmerchant @deanwball Thanks for your interest. I'm not expecting too much danger in the next 18 months, so these would mostly be small updates, but to answer the question: MORE WORRIED: - Anything that looks like shorter timelines, especially superexponential progress on METR time horizons graph or
metr.org
6
6
206
@imwendering
Wen X.
13 days
🔔New paper: Can reasoning models hide their reasoning? We stress-tested Chain-of-Thought (CoT) monitoring and found that while monitors detect ~96% of hidden malicious intent under normal conditions, ⚠️detection can collapse to ~10% under strong obfuscation pressure. 🧵
9
3
15
@vaidehiagrwalla
Vaidehi Agarwalla
14 days
we're hiring for several roles at @theoremlabs - reach out if you're exploring! - ml research scientists - ml research engineers - compiler engineers - senior swe's
@diagram_chaser
Jason Gross
14 days
If you worked on AI or compilers at Meta and were impacted by the layoffs, DM me.
1
4
29
@MariusHobbhahn
Marius Hobbhahn
14 days
We're hiring for Research Scientists / Engineers! - We closely work with all frontier labs - We're a small org and can move fast - We can choose our own agenda and what we publish We're especially looking for people who enjoy fast empirical research. Deadline: 31 Oct!
17
70
723
@ChrisPainterYup
Chris Painter
14 days
Reminder: @METR_Evals is hiring! We have an ambitious research agenda ahead of us, are well-funded, and need senior researchers! We're extending our work on time-horizons, productivity/"uplift", and agent monitoring, and have great research access across industry/gov/academia!
4
21
140
@DKokotajlo
Daniel Kokotajlo
16 days
Even if we figure out how to control superintelligent AI systems, the question "Who controls them?" remains. Our MATS scholar Alex Kastner has gamed out a worst-case scenario for power concentration, in which a single CEO becomes AGI dictator.
54
45
367
@deanwball
Dean W. Ball
16 days
If you said: “We should have real-time incident reporting for large-scale frontier AI cyber incidents.” A lot of people in DC would say: “That sounds ea/doomer-coded.” And yet incident reporting for large-scale, non-AI cyber incidents is the standard practice of all major
11
39
317
@ryan_kidd44
Ryan Kidd
20 days
Rolling applications will remain open, but today is the final day for a 2025 start!
0
0
2
@ryan_kidd44
Ryan Kidd
20 days
Last day to apply to work at MATS! We are the largest AI safety & security fellowship and most prestigious technical AI safety talent pipeline. We need excellent researchers, managers, and generalists. Help us grow the field 10-100x! https://t.co/yHGs5cyQLG
matsprogram.org
@ryan_kidd44
Ryan Kidd
2 months
MATS is hiring world-class researchers, managers, generalists, and more to help grow our AI safety & security talent pipeline! Apply by Oct 17 for a Dec 1 start.
2
2
14