
Ryan Greene (@rabrg)
Followers 4K · Following 2K · Media 22 · Statuses 196
Excited for my first public contribution at @OpenAI — Embeddings v2: it unifies text search, text similarity, and code search, outperforms our best v1 models at most tasks, and is priced 99.8% cheaper!
Our new embedding model is significantly more capable at language processing and code tasks, cost effective, and simpler to use.
18 · 32 · 525
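For context, the whole pitch fits in one API call. A minimal sketch using the legacy openai-python (v0.x) SDK of that era; the placeholder key and sample inputs are mine, not from the announcement:

```python
# One embeddings call covers text search, similarity, and code search
# with a single model (legacy openai-python v0.x API).
import openai

openai.api_key = "sk-..."  # placeholder

resp = openai.Embedding.create(
    model="text-embedding-ada-002",
    input=["How do I reverse a list in Python?", "my_list[::-1]"],
)
vectors = [item["embedding"] for item in resp["data"]]
print(len(vectors), len(vectors[0]))  # 2 vectors, 1536 dimensions each
```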
drop the "large language". just "model". it's cleaner.
@karpathy even the "large" is suspect because what is large today will seem small in the future.
2 · 2 · 36
The most popular use case of our new embedding model is, by far, retrieval-augmented question-answering. Check out @isafulf's new plugin that makes this a native experience with ChatGPT!
We've written plugins for browsing & Python code execution (amazing for data science use-cases), launched with 11 partners, and have open-sourced a high-quality plugin for retrieval over any data you'd like to make accessible to ChatGPT:
1 · 1 · 24
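The retrieval pattern behind that plugin is simple enough to sketch. A minimal retrieval-augmented QA loop, again assuming the legacy openai-python SDK plus numpy; the docs, the answer() helper, and the prompt wording are illustrative assumptions, not the plugin's actual code:

```python
# Embed the corpus once, then for each question retrieve the nearest
# chunks and stuff them into the prompt.
import numpy as np
import openai

docs = [
    "We launched plugins with 11 partners.",
    "The retrieval plugin is open source.",
]

def embed(texts):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

doc_vecs = embed(docs)  # shape (n_docs, 1536); ada-002 vectors are unit-length

def answer(question, k=1):
    q = embed([question])[0]
    top = np.argsort(doc_vecs @ q)[::-1][:k]  # cosine similarity == dot product here
    context = "\n".join(docs[i] for i in top)
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}\nA:"
    chat = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return chat["choices"][0]["message"]["content"]
```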
the best, for free. love that @OpenAI's values are aligned to do things like this
Not only is this the best model in the world, but it's available for free in ChatGPT, which has never before been the case for a frontier model.
0 · 2 · 30
imagine you have a model that puts uniform probability on every possible answer to a question. it is, in other words, random and dumb. if you sample an infinite number of times, you'll get the correct answer though (pass@infinity = 1). as you give the model more intelligence, you can…
it’s over. turns out the rl victory lap was premature. new tsinghua paper quietly shows the fancy reward loops just squeeze the same tired reasoning paths the base model already knew. pass@1 goes up, sure, but the model’s world actually shrinks. feels like teaching a kid to ace…
2 · 1 · 25
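The pass@infinity point follows from a standard identity: with per-sample success probability p, pass@k = 1 - (1 - p)^k, which approaches 1 for any p > 0. A quick sketch (the 1,000-answer universe is a made-up example):

```python
# pass@k for per-sample success probability p:
# P(at least one of k samples correct) = 1 - (1 - p)^k -> 1 as k -> inf.
def pass_at_k(p: float, k: int) -> float:
    return 1.0 - (1.0 - p) ** k

# a "random and dumb" model spreading uniform probability over
# 1,000 candidate answers still saturates as k grows:
p = 1 / 1000
for k in (1, 100, 1_000, 10_000):
    print(k, round(pass_at_k(p, k), 4))
# approx: 0.001, 0.0952, 0.6323, 1.0
```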
i mean this literally. given an infinite universe where self-replication (self-sustaining) is possible, after enough time it is *inevitable*; and once created, entropy will destroy all else: the universe becomes more and more selective for the self-replicating.
3 · 1 · 24
@OpenAI huge thanks to @arvind_io, @lilianweng, @sherwinwu, @sandersted, and many more people whose twitter handles i don't have, for all of the help with this release!
0 · 0 · 14
@pli_cachete the average output of a contemporary LLM is of higher quality than the average document from the internet.
1 · 0 · 13
@willdepue pls pls pls somebody pls
Project Gutenberg, with a better UX for reading and discovering its contents, would rival Wikipedia in accessibility to humanity's knowledge.
0 · 0 · 12
@bcjordan @OpenAI i'm excited to see how such a cheap ($0.40 per 1m tokens), easy-to-use model (versatile representational space, reliable API / no need to self-host) enables more innovative retrieval-based apps like @omniscience42 to be created and scaled :)
0 · 1 · 10
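Rough arithmetic behind "cheap", since the $0.40 per 1m tokens figure is doing the work here; the corpus size below is hypothetical:

```python
# back-of-the-envelope: embedding cost at $0.40 per 1M tokens
PRICE_PER_TOKEN = 0.40 / 1_000_000

corpus_tokens = 50_000_000  # hypothetical 50M-token corpus
print(f"${corpus_tokens * PRICE_PER_TOKEN:,.2f}")  # $20.00
```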
@dwarkesh_sp books referenced in texts you enjoy reading in general. discovered my favorite book this year from a footnote in Schiller's aesthetic letters.
0 · 0 · 6
a beautiful implementation of a beautiful idea.
Life Universe: Explore the infinitely recursive universe of Game of Life! Works in real-time and is perfectly consistent, never fails to remember where you are and where you came from. I made a piece that lets you explore the infinitely recursive universe of the Game of Life. #indiedev
0 · 0 · 6
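For reference, the rules the piece recurses on are tiny. A sketch of one Game of Life step with numpy; the toroidal wrap-around and the glider demo are my assumptions, not details of the artwork:

```python
# One Game of Life step: a live cell survives with 2 or 3 live
# neighbors; a dead cell becomes live with exactly 3 (wrap-around edges).
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return (((grid == 1) & ((neighbors == 2) | (neighbors == 3)))
            | ((grid == 0) & (neighbors == 3))).astype(int)

# a glider drifting across an 8x8 toroidal grid:
grid = np.zeros((8, 8), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
print(grid)
```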
ilya words it a lot better: "i had overestimated the role of new ideas, and underestimated the significance of their execution"
Many believe that great AI advances must contain a new “idea”. But it is not so: many of AI’s greatest advances had the form “huh, turns out this familiar unimportant idea, when done right, is downright incredible”.
1 · 0 · 5
@_smileyball Higashiyama Jisho-ji (Kyoto), Maruyama Park (Kyoto), and Shinjuku Gyoen Garden (Tokyo)!
2 · 0 · 5
@ilyas121_real @mattshumer_ @OpenAI some inspiration: @rileytomasek's, @ultrasoundchad's, @CedricMakes's, @itsandrewgao
0 · 1 · 5
@yoheinakajima memory is the outcome of learning. novelty is a prerequisite for learning (you can’t learn what you already know).
0 · 0 · 3
@Muennighoff @openais @hongjin_su @WeijiaShi2 @MSFTResearch Props to the authors of INSTRUCTOR and E5! They were some of my favorite embeddings papers from the past year, along with MRL, Promptagator, Relative Representations, and, of course, your MTEB :).
0 · 0 · 4