Ryan Lowe 🥞
@ryan_t_lowe
6K Followers · 1K Following · 44 Media · 693 Statuses
full-stack alignment 🥞 @meaningaligned prev: InstructGPT @OpenAI
Berkeley, CA
Joined May 2009
Introducing: Full-Stack Alignment 🥞 A research program dedicated to co-aligning AI systems *and* institutions with what people value. It's the most ambitious project I've ever undertaken. Here's what we're doing: 🧵
13 replies · 45 reposts · 207 likes
Gathering another round of feedback on this piece. Especially want feedback from research mathematicians and (for different reasons) from philosophers who work with decision theory
docs.google.com
Peli Grietzer ([email protected]) · "On Eudaimonia and Optimization": I. What follows for AI alignment if we take the concept of eudaimonia -- active, rational human flourishing -- seriously? I argue...
3 replies · 6 reposts · 28 likes
my experience on "the path" or whatever so far is there's an incredible, shocking amount of karma stored in the body from unfelt emotions, over what has been a pretty great life overall. it's wild how much the body stored. going through the storehouse is exponential with
18 replies · 27 reposts · 664 likes
this seems like a super cool opportunity for early-career researchers interested in sociotechnical alignment!! 💥
1 reply · 0 reposts · 3 likes
curious: are there other tech execs who make public commitments to falsifiable "product principles" like this? how does that play out?
0 replies · 0 reposts · 1 like
most social media companies can't hold to promises like this because of market forces. maybe OpenAI can resist this for a while because it's more of a side business
1 reply · 0 reposts · 5 likes
a fascinating list of principles for Sora. makes me more optimistic. it's worth commending, *IF* there is follow-through (especially: "if we can't fix it, we will discontinue it"). at a minimum, I'd love transparency around the user satisfaction data over time
We are launching a new app called Sora. This is a combination of a new model called Sora 2, and a new product that makes it easy to create, share, and view videos. This feels to many of us like the “ChatGPT for creativity” moment, and it feels fun and new. There is something
2 replies · 2 reposts · 11 likes
Know anyone building a multi-agent negotiation eval? Ideally not fixed params (cost, delivery date) like ANAC, but more open terms / clauses. Urgent opportunity.
0 replies · 2 reposts · 3 likes
New piece! 👽 Diffused AGI "advocate agents" could allow us to solve intractable social and political coordination problems. This same technology offers a foundation for better governance, allowing us to rebuild decaying institutions from the ground up. Link below!
25 replies · 55 reposts · 299 likes
unexpected that anthropic nails The Vibe
0 replies · 0 reposts · 14 likes
Because time will eventually take everything from you, the question of what to sacrifice is only the art of timing
8 replies · 30 reposts · 267 likes
I'm a stuck record, but I think more people should work on the idea of agents as extensions of/advocates for users, and the kinds of institutions that could build on top of this to solve various types of coordination problems. Fast bargaining-in-the-background, instant dispute
13 replies · 25 reposts · 134 likes
Healing is about aligning selves, not fixing parts. Talk this Sat in SF: active inference, nested selves, and a new lens on pain & health.
1 reply · 1 repost · 17 likes
Something I've been thinking and reading a ton about is the idea that as means grow, meaning seems to shrink. Technology creates means, but we shouldn’t expect it to create meaning. It can create the space for us to create meaning, but actually creating meaning is up to us.
29 replies · 13 reposts · 249 likes
this is going to be very cool. congrats @winslow_strong @kathryndevaney et al!!!
Today we are announcing The Consciousness Foundation (https://t.co/MqwcrC04KU), a public charity whose purpose is to advance the scientific understanding of consciousness and foster consciousness development, i.e. things like awakening, healing, and inner development. Why?
0 replies · 1 repost · 8 likes
Looking for a designer in our network who can look at onboarding flows / social sharing flows and guess what will convert / where people will bail. Paid gig for @meaningaligned + world-changing product
15 replies · 3 reposts · 20 likes
Excited for the launch of the position paper that resulted from our Oxford HAI Lab 2025 Thick Models of Choice workshop!
Today we're launching:
- A position paper that articulates the conceptual foundations of FSA (https://t.co/r1UhFtjlEh)
- A website which will be the homepage of FSA going forward (https://t.co/db3R3CEjbD)
5 replies · 8 reposts · 38 likes
Ever since I started thinking seriously about AI value alignment in 2016-17, I've been frustrated by the inadequacy of utility+RL theory to account for the richness of human values. Glad to be part of a larger team now moving beyond those thin theories towards thicker ones.
3 replies · 20 reposts · 133 likes