SPAR
@SPARexec
Followers: 253 · Following: 29 · Media: 0 · Statuses: 29
We're a part-time, virtual research program that gives students and early-career professionals an opportunity to work with professional AI safety researchers.
Joined March 2024
🚨 SPAR Spring 2026 mentor applications are open! SPAR is a part-time, remote research program where mentors lead 3-month projects on impactful topics in AI safety, governance, and policy with funding for compute + project expenses.
you should (normative claim) go be a SPAR mentor. the pool of mentees is like pretty stacked imo and more people to help you execute on important ideas in exchange for mentorship is a pretty sweet deal.
Great opportunity to mentor strong early-stage researchers/people looking to join the good fight
Applications are open for the Spring 2026 Pathfinder Fellowship! Pathfinder supports students leading (or starting!) AI safety & AI policy university groups around the world. Here's what it's all about 👇
We're looking for:
- AI safety or policy researchers
- Grad & PhD students
- Alumni of research fellowships
Learn more:
sparai.org
Mentor the next generation of researchers in AI safety and policy. Submit research project proposals and work with talented mentees to advance your research agenda.
Mentorship at SPAR means collaborating with top early-career talent, expanding your research capacity, and potentially finding long-term collaborators. Apply by Dec 1 to mentor in Spring 2026 👇
fillout.kairos-project.org
I'm hiring! Kairos is seeking a Founding Generalist (90k-150k, remote) to join our core team and work with us to scale our programs. We're focused on accelerating talent into AI safety and policy. Also: $5k for referrals that get us a hire who stays for 6 months!
Work on a part-time AI safety, governance, security, and strategy project. Open to students & professionals; prior research experience is not required for all projects.
Only 1 day left to apply for SPAR's Fall 2025 cohort! Apply by August 20 to join the largest SPAR round yet: 80+ projects!
@Tianyi_Alex_Qiu and I will be co-mentoring a SPAR stream. Please apply if you are bugged by:
- LLMs don't seek truth over confirmation/sycophancy in open-ended problems;
- we lack human data about how LLMs could help with human learning, making judgments, and decision-making;
-
We're excited to announce that mentee applications are now open for the Fall 2025 round of the SPAR research program! This will be our largest round ever, featuring 80+ projects across AI safety, policy, governance, security, and strategy.
You have 2 days left to apply to my SPAR project and work with me on model diffing! Depending on mentees' interests, we'll focus on things like building auto-diffing pipelines, analyzing model personas, or diffing-based circuits. Application link in thread ⬇️
Awesome research opportunity to work with Rishub, if you're interested in how humans and AIs can work together!
Interested in a part-time research fellowship to understand how humans and AI can best work together to perform well at AI-evaluation tasks and better detect AI misalignment? Join my SPAR project!! (1/4)
SPAR was an amazing experience for me! This time there are more interesting projects.
Consider applying to work with me on agency and goal-directedness! If you do, please do fill in the project-specific questions ;)
Excited to be a SPAR mentor this Fall! Come work with me on figuring out how to measure explanatory faithfulness for LLMs!
@zhonghaohe and I are mentoring at SPAR this round! Consider applying if you are interested in the epistemic impact of AI, and the use of AI to facilitate moral/knowledge progress. We welcome both technically-minded and social science-minded people.
Applications are open through August 20, but we recommend applying early as many mentors review applications on a rolling basis.
Explore the available projects and mentors here:
sparai.org
SPAR connects rising talent with experts in AI safety and policy through structured mentorship and impactful research projects. Apply to work on research addressing risks from advanced AI.