Ryan Greene
@rabrg
perpetual meme machines @OpenAI
Joined April 2010
Followers: 4K · Following: 2K · Media: 22 · Statuses: 196
@rabrg
Ryan Greene
2 years
i have great respect for Ilya, and trust in his intentions. but i can’t comprehend the lack of transparency in his decision.
27
19
670
@rabrg
Ryan Greene
2 years
Excited for my first public contribution at @OpenAI — Embeddings v2: it unifies text search, text similarity, and code search, outperforms our best v1 models at most tasks, and is priced 99.8% cheaper!
@OpenAI
OpenAI
2 years
Our new embedding model is significantly more capable at language processing and code tasks, cost effective, and simpler to use.
18
32
525
@rabrg
Ryan Greene
5 months
too bad the narrow domains the best reasoning models excel at — coding and mathematics — aren't useful for expediting the creation of AGI.
31
15
458
@rabrg
Ryan Greene
5 months
seems pretty obvious
Tweet media one
Tweet media two
@slow_developer
Haider.
5 months
openAI researcher said this yesterday, and today sam said we are near the singularity. what's going on?
Tweet media one
11
22
311
@rabrg
Ryan Greene
2 years
we’re so back.
7
6
254
@rabrg
Ryan Greene
5 months
oh wait.
4
2
222
@rabrg
Ryan Greene
3 months
@tszzl i think we have a worthy contender, internally at least.
16
3
216
@rabrg
Ryan Greene
22 days
is a black hole a little ominous? yes. is it the optimal emblem of a *singularity*? also yes
Tweet media one
2
1
86
@rabrg
Ryan Greene
2 years
i got lucky picking the best week to surround myself with zen
Tweet media one
Tweet media two
Tweet media three
7
0
82
@rabrg
Ryan Greene
1 year
born too late to explore the world, born too early to explore the universe. born just in time to explore latent space.
5
7
59
@rabrg
Ryan Greene
2 years
@harishkgarg the duality of man
Tweet media one
2
0
56
@rabrg
Ryan Greene
1 year
Project Gutenberg, with a better UX for reading and discovering its contents, would rival Wikipedia in accessibility to humanity's knowledge.
3
4
45
@rabrg
Ryan Greene
4 months
@rapha_gl trump: announces one of the largest historic investments in USA's infrastructure to build AGI. journalists at the press conference: do you believe assaulting cops should be illegal?
4
2
40
@rabrg
Ryan Greene
9 months
drop the "large language". just "model". it's cleaner.
@yacineMTB
kache
9 months
@karpathy even the "large" is suspect because what is large today will seem small in the future.
2
2
36
@rabrg
Ryan Greene
2 years
The most popular use case of our new embedding model is, by far, retrieval-augmented question-answering. Check out @isafulf's new plugin that makes this a native experience with ChatGPT!
@gdb
Greg Brockman
2 years
We've written plugins for browsing & Python code execution (amazing for data science use-cases), launched with 11 partners, and have open-sourced a high-quality plugin for retrieval over any data you'd like to make accessible to ChatGPT:
1
1
24
@rabrg
Ryan Greene
1 year
love the phrasing of "world simulators": imagine the scale of intelligence needed to accurately model relationships between complex entities in a physical environment. the beautiful aesthetics are just a cherry on top
4
1
32
@rabrg
Ryan Greene
2 years
one of the biggest lessons i've learned from my first year doing research is that, when someone comes up with an idea, it's almost certain that someone else has either already done it or is about to do it.
3
0
31
@rabrg
Ryan Greene
1 year
the best, for free. love that @OpenAI's values are aligned to do things like this
Tweet media one
@LiamFedus
William Fedus
1 year
Not only is this the best model in the world, but it's available for free in ChatGPT, which has never before been the case for a frontier model.
0
2
30
@rabrg
Ryan Greene
5 months
@social_rate few words are as cowardly.
1
0
19
@rabrg
Ryan Greene
9 months
@kepano @obsdmd i just want proper annotation support for PDFs 🥺.
1
0
28
@rabrg
Ryan Greene
3 months
I am hearing disturbing rumors that Ilya Sutskever has made the Boltzmann Machine computationally tractable.
4
1
29
@rabrg
Ryan Greene
3 months
@apples_jimmy @tszzl jerry is too effective to be a priest.
0
1
27
@rabrg
Ryan Greene
1 year
i wish more recommendation algorithms gave users control of the diversity of their feeds, like the temperature slider of a language model. give me entropy. help me discover the unknown.
3
0
25
@rabrg
Ryan Greene
2 years
@tszzl phabricator >>>>>> github. don't @ me.
1
1
26
@rabrg
Ryan Greene
1 month
imagine you have a model that puts uniform probability on every possible answer to a question. it is, in other words, random and dumb. if you sample an infinite number of times, you'll get the correct answer though (pass@infinity = 1). as you give the model more intelligence, you can.
@iruletheworldmo
🍓🍓🍓
1 month
it’s over. turns out the rl victory lap was premature. new tsinghua paper quietly shows the fancy reward loops just squeeze the same tired reasoning paths the base model already knew. pass@1 goes up, sure, but the model’s world actually shrinks. feels like teaching a kid to ace
Tweet media one
2
1
25
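A minimal sketch of the pass@k arithmetic this tweet gestures at, under a hypothetical toy setup (the helper name pass_at_k and the example numbers are illustrative, not from the thread): a model that assigns probability p to the correct answer and draws k independent samples has pass@k = 1 - (1 - p)^k, which approaches 1 as k grows even when p is tiny, e.g. uniform over N answers gives p = 1/N.

```python
# Toy illustration: pass@k for a model that assigns probability p to the
# correct answer and samples k independent completions.
# pass@k = 1 - (1 - p)**k, so even a uniform "random and dumb" model
# (p = 1/N) approaches pass@k -> 1 as k -> infinity.

def pass_at_k(p: float, k: int) -> float:
    """Probability that at least one of k independent samples is correct."""
    return 1.0 - (1.0 - p) ** k

if __name__ == "__main__":
    uniform_p = 1 / 1000   # hypothetical: 1000 possible answers, uniform model
    sharper_p = 0.5        # hypothetical: a model that concentrates probability mass
    for k in (1, 10, 1000, 100_000):
        print(f"k={k:>7}  uniform={pass_at_k(uniform_p, k):.4f}  "
              f"sharper={pass_at_k(sharper_p, k):.4f}")
```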
@rabrg
Ryan Greene
2 years
@tszzl another word for that something is love.
2
0
22
@rabrg
Ryan Greene
5 months
i mean this literally. given an infinite universe where self-replication (self-sustaining) is possible, after enough time it is *inevitable*, and once created, entropy will destroy all else: the universe becomes more and more selective for the self-replicating.
@rabrg
Ryan Greene
9 months
in a system where self-replication is possible, its optimization is inevitable.
3
1
24
@rabrg
Ryan Greene
2 years
i can’t wait for content recommendation algorithms to get augmented with natural language feedback, letting users granularly and iteratively critique and refine their feeds in real-time.
1
2
13
@rabrg
Ryan Greene
7 months
@michael_nielsen (he requested his name to be removed).
0
0
19
@rabrg
Ryan Greene
2 months
@nickcammarata rare negative valence nick tweet.
0
0
21
@rabrg
Ryan Greene
2 years
@yoavgo profit from embeddings is a rounding error after the cost to serve them; we offer them solely as a tool to augment the generative models.
1
1
21
@rabrg
Ryan Greene
4 months
we live in a time where the Church is writing treatises on the relation between human and artificial intelligence
Tweet media one
2
1
21
@rabrg
Ryan Greene
2 years
just solved alignment
Tweet media one
2
0
13
@rabrg
Ryan Greene
1 month
if the universe is deterministic and was instantiated from a near zero-entropy initial state, it as a whole would have a lower Kolmogorov complexity than any particular human construct (e.g. Wikipedia).
5
0
19
@rabrg
Ryan Greene
8 months
large language models are Strange Loops: every token ascends through the entire depth of the model, and each subsequent token loops back to the beginning, self-referencing through the generation of the previous step.
2
0
19
@rabrg
Ryan Greene
2 years
reading these lines in a metacognition text has me even more deeply appreciating the current state of LLMs
Tweet media one
0
0
8
@rabrg
Ryan Greene
2 months
@DKokotajlo @slatestarcodex @eli_lifland @thlarsen congratulations on the release Daniel!
1
0
17
@rabrg
Ryan Greene
5 months
@greenteazyn the bounds of effective compute and intelligence are both infinite.
1
2
17
@rabrg
Ryan Greene
1 year
brain-computer interfaces will enable communication through embeddings instead of tokens, increasing the fidelity to thought by OOMs.
2
3
16
@rabrg
Ryan Greene
1 year
reading the writings of my favorite authors sometimes feels like reading an articulation of my subconscious.
1
0
14
@rabrg
Ryan Greene
9 months
in a system where self-replication is possible, its optimization is inevitable.
@newscientist
New Scientist
11 months
A digital "primordial soup" with no rules or direction can lead to the emergence of self-replicating artificial life forms, in an experiment that may hint at how biological life began on Earth. Learn more:
0
1
15
@rabrg
Ryan Greene
2 years
@OpenAI huge thanks to @arvind_io, @lilianweng, @sherwinwu, @sandersted, and many more people that i don't have the twitter handles of for all of the help with this release!
0
0
14
@rabrg
Ryan Greene
2 years
what @Google lacks in productionized AI assistants, imo they make up for with these new drone shots of parks on Google Maps
0
2
14
@rabrg
Ryan Greene
12 days
some thoughts can only be expressed through a saxophone.
1
1
14
@rabrg
Ryan Greene
1 year
the simplest of changes, like going to a park, or turning on some music, can make a piece of writing resonate so much more deeply with me. like my comprehension is limited by my environment. and it's empowering to remember how easy it is to shape it.
0
0
12
@rabrg
Ryan Greene
2 months
name a more iconic set of keywords, i’ll wait
Tweet media one
1
0
14
@rabrg
Ryan Greene
1 year
being in tune with your tastes compounds: knowing what you like makes it easier to find things you do; the more you discover them, the more opportunities you have to understand what draws you to them.
1
0
12
@rabrg
Ryan Greene
1 year
nature is a language that science tries to model.
0
0
11
@rabrg
Ryan Greene
1 year
it is no coincidence that "comprehension" and "compression" spring from the same root.
1
0
11
@rabrg
Ryan Greene
4 months
@pli_cachete the average output of a contemporary LLM is of higher quality than the average document from the internet.
1
0
13
@rabrg
Ryan Greene
3 months
wild how much a bay window can improve your qol.
0
0
12
@rabrg
Ryan Greene
15 days
@willdepue pls pls pls somebody pls
@rabrg
Ryan Greene
1 year
Project Gutenberg, with a better UX for reading and discovering its contents, would rival Wikipedia in accessibility to humanity's knowledge.
0
0
12
@rabrg
Ryan Greene
1 year
@yoheinakajima elon was onto something with X Æ A-Xii.
1
1
11
@rabrg
Ryan Greene
22 days
fun fact: the first formal theory of the relation between compression and general intelligence (Solomonoff induction) emerged simultaneously with, and independently of, the coining of the term "singularity" (von Neumann) in the late 1950s. the Zeitgeist knows what's up.
1
0
12
@rabrg
Ryan Greene
1 year
writing is a noisy serialization of the mind. language models are trained to reproduce the noise (as opposed to directly reducing it).
0
1
10
@rabrg
Ryan Greene
2 months
friendly reminder that microchips are the most precise and complex non-biological arrangements of matter in the known universe.
2
0
11
@rabrg
Ryan Greene
3 months
if entropy dictates the arrow of time, intelligence is a kind of time reversal.
1
0
11
@rabrg
Ryan Greene
11 months
nietzsche would’ve been a cracked ai researcher
Tweet media one
0
0
10
@rabrg
Ryan Greene
2 months
@RichardSSutton where'd your laser eyes go?
1
0
10
@rabrg
Ryan Greene
4 months
it is interesting that two of the most inspiring scientist-philosophers for AI researchers, Douglas Hofstadter (Gödel, Escher, Bach) and David Deutsch (The Beginning of Infinity), are themselves AI skeptics.
2
0
11
@rabrg
Ryan Greene
1 year
metaphors are metaphors for the interconnectedness of nature.
0
0
9
@rabrg
Ryan Greene
2 years
@bcjordan @OpenAI i'm excited to see how such a cheap ($0.40 per 1M tokens), easy-to-use model (versatile representational space, reliable API / don't need to self-host) enables more innovative retrieval-based apps like @omniscience42 to be created and scaled :)
0
1
10
@rabrg
Ryan Greene
10 days
my inner monologue just justified crossing the street diagonally by having "increased sample efficiency". i'm so fried.
0
0
11
@rabrg
Ryan Greene
2 years
passing copies of books between friends, seeing each other’s notes and highlights, is one of the cutest things ever.
0
0
5
@rabrg
Ryan Greene
22 days
@apples_jimmy we’re trying jimmy.
0
0
10
@rabrg
Ryan Greene
2 years
i can’t help but have fomo of all the spirit going on in the arena though. looking forward to getting back :).
0
0
8
@rabrg
Ryan Greene
1 year
Borges' Funes is a prophetic meditation on large language models: the mapping of words to numbers, the idea of something that has a greater capacity of memory than all of mankind, and contrasting that capacity with that of "thinking".
0
0
8
@rabrg
Ryan Greene
11 months
training language models on their own outputs
Tweet media one
1
1
8
@rabrg
Ryan Greene
1 year
mere mortals is an existential romance between humanity and knowledge.
1
0
8
@rabrg
Ryan Greene
14 days
@dwarkesh_sp perhaps a little too high.
0
0
7
@rabrg
Ryan Greene
3 months
@tszzl yarvin? thiel? never heard of them.
0
0
4
@rabrg
Ryan Greene
11 months
human preferences are a local optimum.
1
1
7
@rabrg
Ryan Greene
1 year
@tszzl 😳.
0
0
8
@rabrg
Ryan Greene
8 months
o1's long chains of thought make it one of the loopiest models yet.
1
0
8
@rabrg
Ryan Greene
3 months
(this is a prophetic shitpost).
0
0
8
@rabrg
Ryan Greene
1 year
the amount you are able to take away from an experience is scaled by your interest in it; in this sense, being in tune with your tastes is a type of meta-learning.
1
1
7
@rabrg
Ryan Greene
3 months
@kaicathyc @karpathy @rapha_gl saying what we all were thinking.
1
0
7
@rabrg
Ryan Greene
1 year
@willdepue realizing simulation theory, one day at a time.
0
0
7
@rabrg
Ryan Greene
1 year
friends don’t let friends philosophize
Tweet media one
Tweet media two
0
0
7
@rabrg
Ryan Greene
5 months
fwiw, Hofstadter disagrees (i disagree with his disagreement)
Tweet media one
1
0
6
@rabrg
Ryan Greene
6 months
@dwarkesh_sp books referenced in texts you enjoy reading in general. discovered my favorite book this year from a footnote in Schiller's aesthetic letters.
0
0
6
@rabrg
Ryan Greene
1 year
a beautiful implementation of a beautiful idea.
@shr_id
saharan / さはら
2 years
Life Universe — Explore the infinitely recursive universe of the Game of Life! Works in real-time and is perfectly consistent, never fails to remember where you are and where you came from. (I made a piece that lets you explore the infinitely recursive universe of the Game of Life.) #indiedev
0
0
6
@rabrg
Ryan Greene
2 years
@MarkovMagnifico tartine is the fuel of agi.
0
0
6
@rabrg
Ryan Greene
8 months
language is the cartography of the mind: words mapping the infinite territory of ideas.
0
0
5
@rabrg
Ryan Greene
6 months
taste points to subjective liminal experiences: ideas that lie on both sides of the frontier of the mind. the part that lies beyond it sparks curiosity, while the part that lies within it provides the requisite understanding to expand towards it.
0
0
5
@rabrg
Ryan Greene
2 years
ilya words it a lot better: i had overestimated the role of new ideas, and underestimated the significance of their execution
@ilyasut
Ilya Sutskever
2 years
Many believe that great AI advances must contain a new “idea”. But it is not so: many of AI’s greatest advances had the form “huh, turns out this familiar unimportant idea, when done right, is downright incredible”.
1
0
5
@rabrg
Ryan Greene
2 years
@_smileyball Higashiyama Jisho-ji (Kyoto), Maruyama Park (Kyoto), and Shinjuku Gyoen Garden (Tokyo)!
2
0
5
@rabrg
Ryan Greene
2 years
@sullyj3 @ciphergoth it took a couple of turns, but it works
Tweet media one
1
0
5
@rabrg
Ryan Greene
6 months
@alexkehr @sama this is a critical part of our 2025 Q1 roadmap.
2
0
5
@rabrg
Ryan Greene
2 years
@tszzl @sama i think there's virtually nothing that is acceptable to 99% of people.
1
0
5
@rabrg
Ryan Greene
2 years
0
1
5
@rabrg
Ryan Greene
1 year
noah should be sued
Tweet media one
0
1
4
@rabrg
Ryan Greene
1 year
@Plinz @turincomplete who were the others?.
1
0
3
@rabrg
Ryan Greene
2 years
@yoheinakajima memory is the outcome of learning. novelty is a prerequisite for learning (you can’t learn what you already know).
0
0
3
@rabrg
Ryan Greene
3 months
this post was sponsored by Nick Land.
0
0
3
@rabrg
Ryan Greene
5 months
@lmcorrigan1 yes, for they are the only things capable of combatting it.
0
0
4
@rabrg
Ryan Greene
1 year
@VorticonCmdr @willdepue everyone should support everything that democratizes knowledge.
0
0
3
@rabrg
Ryan Greene
1 month
the future sounds like synths and strings.
0
0
4
@rabrg
Ryan Greene
4 months
0
0
4
@rabrg
Ryan Greene
2 years
@Muennighoff @openais @hongjin_su @WeijiaShi2 @MSFTResearch Props to the authors of INSTRUCTOR and E5! They were some of my favorite embeddings papers from the past year, along with MRL, Promptagator, Relative Representations, and, of course, your MTEB :)
0
0
4