
Ryan Lowe 🥞 @ICML
@ryan_t_lowe
Followers: 6K · Following: 1K · Media: 43 · Statuses: 662
full-stack alignment 🥞 @meaningaligned prev: InstructGPT @OpenAI 🦋 @ ryantlowe
Berkeley, CA
Joined May 2009
RT @PhilippKoralus: Excited for the launch of the position paper that resulted from our Oxford HAI Lab 2025 Thick Models of Choice workshop…
RT @xuanalogue: Ever since I started thinking seriously about AI value alignment in 2016-7, I've been frustrated by the inadequacy of utili…
RT @RyanOthKearns: It was terrifically energising to work on this position paper. Floored by the ambition and optimism coming out of the @m…
I expect @j_foerst will do some of the best FSA-relevant research around, particularly on "win-win AI negotiation". If you're about to do a PhD, strongly consider joining him at @FLAIR_Ox!!
The term "AI alignment" is often used without specifying "to whom?", and much of the work on AI alignment in practice looks more like "AI controllability" without answering "who controls the controller?" (i.e. user or operator). One key challenge is that alignment is fundamentally…
RT @Dr_Atoosa: Excited to be a contributor to full-stack alignment (FSA) ⭐️ you can read our position paper about the conceptual foundation…
RT @DefenderOfBasic: all of the AI alignment efforts are obviously guaranteed to fail because they're trying to do it in isolation, except…
I guess now is also a good time to announce that I've officially joined @meaningaligned!! I'll be working on field building for full-stack alignment -- helping nurture this effort into a research community with excellent vibes that gets shit done. weeeeeeeeeee 🚀🚀
Introducing: Full-Stack Alignment 🥞. A research program dedicated to co-aligning AI systems *and* institutions with what people value. It's the most ambitious project I've ever undertaken. Here's what we're doing: 🧵
RT @klingefjord: Extremely honored to be working on this project alongside a series of amazing researchers!! This research program is our…
RT @IasonGabriel: Check out this great new initiative + paper led by @ryan_t_lowe, @edelwax, @xuanalogue, @klingefjord & the fine folks @me…
RT @edelwax: In 2017, I was working to change FB News Feed's recommender to use “thick models of value” (per the paper we just released). @…
If you're excited about these ideas, drop me a line!! We're looking for researchers to collaborate with -- send an email to research@meaningalignment.org. It's gonna be fun. ✌️
Examples of TMV (thick models of value) include: resource-rational contractualism by @sydneymlevine et al, self-other overlap by @MarcCarauleanu et al, and our previous work on moral graph elicitation. It's an emerging field, but we think early research is very promising!!