Ryan Lowe 🥞

@ryan_t_lowe

Followers
6K
Following
1K
Media
44
Statuses
693

full-stack alignment 🥞 @meaningaligned prev: InstructGPT @OpenAI

Berkeley, CA
Joined May 2009
@ryan_t_lowe
Ryan Lowe 🥞
4 months
Introducing: Full-Stack Alignment 🥞 A research program dedicated to co-aligning AI systems *and* institutions with what people value. It's the most ambitious project I've ever undertaken. Here's what we're doing: 🧵
@ryan_t_lowe
Ryan Lowe 🥞
6 days
🥞
@SmithaMilli
smitha milli
8 days
can we finally use natural language to optimize for deeper notions of what users want from their recommender systems?
@peligrietzer
Peli Grietzer
9 days
Gathering another round of feedback on this piece. Especially want feedback from research mathematicians and (for different reasons) from philosophers who work with decision theory
docs.google.com
Peli Grietzer ([email protected]) — On Eudaimonia and Optimization. I. What follows for AI alignment if we take the concept of eudaimonia -- active, rational human flourishing -- seriously? I argue...
@ryan_t_lowe
Ryan Lowe 🥞
14 days
this is the way
@maxwellfarrens
Max Farrens
14 days
Potentially controversial, but we don’t really track metrics at the Dwarkesh Podcast. Our goal with the show is to understand what the next decade looks like. We know this has nothing to do with episode performance, but virality is always a tempting target. So we intentionally
@nickcammarata
Nick
18 days
my experience on "the path" or whatever so far is there's an incredible, shocking amount of karma stored in the body from unfelt emotions over what has been a kind of pretty overall great life. it's wild how much the body stored. going through the storehouse is exponential with
@ryan_t_lowe
Ryan Lowe 🥞
1 month
this seems like a super cool opportunity for early researchers interested in sociotechnical alignment!! 💥
@sebkrier
Séb Krier
2 months
Making cooperation great again with @yonashav 🖇️
@ryan_t_lowe
Ryan Lowe 🥞
1 month
curious: are there other tech execs that make public commitments to falsifiable "product principles" like this? how does that play out?
@ryan_t_lowe
Ryan Lowe 🥞
1 month
most social media companies can't hold to promises like this because of market forces. maybe OpenAI can resist this for a while because it's more of a side business
@ryan_t_lowe
Ryan Lowe 🥞
1 month
a fascinating list of principles for Sora. makes me more optimistic. it's worth commending, *IF* there is follow-through (especially: "if we can't fix it, we will discontinue it"). at a minimum, I'd love transparency around the user satisfaction data over time
@sama
Sam Altman
1 month
We are launching a new app called Sora. This is a combination of a new model called Sora 2, and a new product that makes it easy to create, share, and view videos. This feels to many of us like the “ChatGPT for creativity” moment, and it feels fun and new. There is something
@edelwax
Joe Edelman 🥞
1 month
Know anyone building a multi-agent negotiation eval? Ideally not fixed params (cost, delivery date) like ANAC, but more open terms / clauses. Urgent opportunity
@sebkrier
Séb Krier
1 month
New piece! 👽 Diffused AGI "advocate agents" could allow us to solve intractable social and political coordination problems. This same technology offers a foundation for better governance, allowing us to rebuild decaying institutions from the ground up. Link below!
@ryan_t_lowe
Ryan Lowe 🥞
2 months
unexpected that anthropic nails The Vibe
@claudeai
Claude
2 months
Keep thinking.
@VividVoid_
Vivid Void
2 months
Because time will eventually take everything from you, the question of what to sacrifice is only the art of timing
@sebkrier
Séb Krier
3 months
I'm a stuck record, but I think more people should work on the idea of agents as extensions of/advocates for users, and the kinds of institutions that could build on top of this to solve various types of coordination problems. Fast bargaining-in-the-background, instant dispute
@maxkshen
Max Shen
3 months
Healing is about aligning selves, not fixing parts. Talk this Sat in SF: active inference, nested selves, and a new lens on pain & health.
@packyM
Packy McCormick
3 months
Something I've been thinking and reading a ton about is the idea that as means grow, meaning seems to shrink. Technology creates means, but we shouldn’t expect it to create meaning. It can create the space for us to create meaning, but actually creating meaning is up to us.
@ryan_t_lowe
Ryan Lowe 🥞
3 months
this is going to be very cool. congrats @winslow_strong @kathryndevaney et al!!!
@winslow_strong
Winslow.Ξ
3 months
Today we are announcing The Consciousness Foundation (https://t.co/MqwcrC04KU), a public charity whose purpose is to advance the scientific understanding of consciousness and foster consciousness development, i.e. things like awakening, healing, and inner development. Why?
@edelwax
Joe Edelman 🥞
3 months
Looking for a designer in our network who can look at onboarding flows / social sharing flows and guess what will convert / where people will bail. Paid gig for @meaningaligned + world-changing product
@PhilippKoralus
Philipp Koralus
4 months
Excited for the launch of the position paper that resulted from our Oxford HAI Lab 2025 Thick Models of Choice workshop!
@ryan_t_lowe
Ryan Lowe 🥞
4 months
Today we're launching:
- A position paper that articulates the conceptual foundations of FSA ( https://t.co/r1UhFtjlEh)
- A website which will be the homepage of FSA going forward ( https://t.co/db3R3CEjbD)
@xuanalogue
xuan (ɕɥɛn / sh-yen)
4 months
Ever since I started thinking seriously about AI value alignment in 2016-7, I've been frustrated by the inadequacy of utility+RL theory to account for the richness of human values. Glad to be part of a larger team now moving beyond those thin theories towards thicker ones.