rewon Profile
rewon

@rewonfc

Followers: 1K
Following: 1K
Media: 4
Statuses: 101

Here for the research papers

San Francisco, CA
Joined November 2012
@rewonfc
rewon
15 days
some news: i quit MSFT last week after 3.5 years with the inflection/microsoft AI crew -- lots of memorable times. i'm taking some time to reconnect w/ old colleagues and friends and it's been great. if you are reading this and want to catch up, DM me!!
1
0
11
@mustafasuleyman
Mustafa Suleyman
2 years
Excited to announce that we’ve raised $1.3B to build one of the largest clusters in the world and turbocharge the creation of Pi, your personal AI. https://t.co/p5AfRXGPan
142
316
3K
@inflectionAI
Inflection AI
2 years
It’s a big week! We’ve raised $1.3 billion and are building the world’s largest AI cluster (22k H100s). We’re grateful for our investors and new funding that will help us accelerate our mission to make personal AI available to every person in the world.
inflection.ai
Inflection AI builds the world’s largest AI cluster with 22,000 NVIDIA H100 GPUs, backed by $1.3B funding from Microsoft, NVIDIA, Bill Gates, and more.
72
216
2K
@mustafasuleyman
Mustafa Suleyman
2 years
We have amazing results to announce! Inflection-1 is our new best-in-class LLM powering Pi, outperforming GPT-3.5, LLaMA and PaLM-540B on major benchmarks commonly used for comparing LLMs.
inflection.ai
Inflection AI’s Inflection-1 LLM powers Pi, your personal AI, outperforming GPT-3.5 and LLaMA. Scalable, safe, and designed for everyone. Try Pi at Pi.ai!
21
89
529
@kritipraks
Kritika Prakash
4 years
Kullback-Leibler divergence is not the same as Leibler-Kullback divergence
49
285
3K
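The joke above lands because KL divergence really is asymmetric: for distributions p and q, D_KL(p‖q) generally differs from D_KL(q‖p). A minimal NumPy sketch (the two distributions are made up for illustration):

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D_KL(p || q) for discrete distributions.

    Assumes p and q are valid probability vectors with q > 0 wherever p > 0.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

p = [0.8, 0.1, 0.1]
q = [0.4, 0.3, 0.3]

forward = kl(p, q)  # D_KL(p || q)
reverse = kl(q, p)  # D_KL(q || p)
print(forward, reverse)  # the two values differ: KL is not symmetric
```

Swapping the arguments changes which distribution's support gets penalized, which is exactly why "Kullback-Leibler" and "Leibler-Kullback" are not interchangeable.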
@_j_towns
Jamie Townsend
5 years
I've ported @rewonfc's very deep VAE ( https://t.co/0lQp9BQgcI) from PyTorch to JAX/Flax! Hope other JAX users find this SOTA VAE useful as a forkable baseline... https://t.co/rJno4lcLSO.
github.com
Very deep VAEs in JAX/Flax. Contribute to j-towns/vdvae-jax development by creating an account on GitHub.
2
11
94
@arankomatsuzaki
Aran Komatsuzaki
5 years
My post on SotA image generative models was released 🥳 Featured 7 notable recent papers with emphasis on:
- VD-VAE
- VAE + discriminator (e.g. VQGAN, DC-VAE)
- Diffusion models (e.g. DDPMv2)
Plus some notes on scaling (e.g. DALL-E) and evaluation. https://t.co/uOV6jVwdmA
arankomatsuzaki.wordpress.com
I have aggregated some of the SotA image generative models released recently, with short summaries, visualizations and comments. The overall development is summarized, and the future trends are spe…
1
56
295
@dpkingma
Durk Kingma
5 years
IMO, best empirical proof to date that AI can be creative. After this sinks in, will there be any naysayers left?
@dpkingma
Durk Kingma
5 years
"The images are preprocessed to 256x256 resolution during training. [...] each image is compressed to a 32x32 grid of discrete latent codes using a discrete VAE that we pre-trained using a continuous relaxation." GPT + VAE + scale = impressive results! https://t.co/smbQcICNCL
8
6
129
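The quoted setup works because the discrete VAE drastically shortens the sequence the transformer has to model. The arithmetic, using the figures from the tweet (256x256 images, 32x32 latent grid):

```python
# Sequence-length arithmetic for the quoted GPT + discrete-VAE setup.
image_side = 256
latent_side = 32

raw_values = image_side * image_side * 3   # 196,608 RGB values per image
latent_tokens = latent_side * latent_side  # 1,024 discrete codes per image

# The transformer models ~192x fewer positions than raw pixels would require.
print(raw_values // latent_tokens)  # 192
```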
@rewonfc
rewon
5 years
Congrats to Aditya and the rest of the team for an awesome release!
@ilyasut
Ilya Sutskever
5 years
Synthetic capybaras in different styles https://t.co/AXQ5mTIkuy
0
0
14
@rewonfc
rewon
5 years
It is easy to write a program but it is difficult to create a machine that will read those lines. (Was looking through my journal, found this gpt-3 generation conditioned on haikus)
0
0
18
@rewonfc
rewon
5 years
Really thought-provoking work -- congrats to the authors @poolio @YSongStanford @dpkingma and more!
@DrYangSong
Yang Song
5 years
Happy to announce our new work on score-based generative modeling: high quality samples, exact log-likelihoods, and controllable generation, all available through score matching and Stochastic Differential Equations (SDEs)! Paper: https://t.co/mwddOr3AA3
0
0
6
@ArashVahdat
Arash Vahdat
5 years
It breaks my 💚 when researchers tell me that VAEs don't work. My first question is usually "did you try a hierarchical VAE or a vanilla VAE?", and the answer is usually vanilla VAE. VAEs work much better with hierarchical structures. NVAEs and this work take this to the extreme!
@hardmaru
hardmaru
5 years
Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images “Very Deep VAEs” achieve higher likelihoods, use fewer parameters, generate samples 1000x faster, and are more easily applied to hi-res images, compared to PixelCNN. https://t.co/bjjKrZvnb2
8
90
567
@spolu
Stanislas Polu
5 years
Posted my first paper on arXiv💥🙌 GPT-f is a Transformer-based automated theorem prover. We show that Transformer + Search is well-suited to formal reasoning and continuous self-improvement 🦾 https://t.co/VllDcCV3Kc
17
187
870
@rewonfc
rewon
5 years
Our base model is a Sparse Transformer. If we make it bigger and train for a while with this augmentation, it results in both very high likelihoods (2.55-2.65 bpd on CIFAR-10) and samples of equal or better quality than most GANs (as measured by FID). Code here:
github.com
Code for the paper, "Distribution Augmentation for Generative Modeling", ICML 2020. - openai/distribution_augmentation
1
2
9
@rewonfc
rewon
5 years
Thanks if you came to our ICML poster on Distribution Augmentation. The zoom discussion was way more fun/interesting than I expected! TLDR of our work: use powerful data aug in your generative model by conditioning it on the aug. Improves samples + likelihoods considerably.
2
9
48
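The core idea in the tweet above — apply an augmentation but tell the model which one — can be sketched in a few lines. This is a toy illustration, not the paper's implementation; the augmentation set and helper name are invented for the example:

```python
import random

# Hypothetical augmentation set: each augmentation gets a conditioning id
# the generative model receives alongside the (augmented) example.
AUGMENTATIONS = {
    0: lambda img: img,                                 # identity
    1: lambda img: [row[::-1] for row in img],          # horizontal flip
    2: lambda img: [list(r) for r in zip(*img[::-1])],  # 90-degree rotation
}

def make_training_example(img):
    """Sample an augmentation, apply it, and pair the result with the
    augmentation id, so the model learns p(x_aug | aug_id) rather than
    silently mixing augmented and clean data."""
    aug_id = random.choice(list(AUGMENTATIONS))
    return aug_id, AUGMENTATIONS[aug_id](img)

# At generation time, condition on aug_id=0 (identity) to sample from
# the un-augmented data distribution.
```

Conditioning on the augmentation id is what lets the model benefit from the extra data without its unconditional samples inheriting flips and rotations.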
@xuenay
Kaj Sotala
5 years
I keep seeing all kinds of crazy reports about people's experiences with GPT-3, so I figured that I'd collect a thread of them.
33
857
3K
@markchen90
Mark Chen
5 years
Excited to share what I've been working on with @AlecRad, @rewonfc, @ilyasut and others!
@OpenAI
OpenAI
5 years
We found that just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples. https://t.co/whREMuBvxx
3
27
100
@mcleavey
Christine McLeavey
6 years
Hot Tub Christmas (with GPT-2 lyrics selected by @rewonfc ) is my own favorite -- even though the model can't figure out what's going on in the completely terrible intro. Our weird but fun new holiday tradition!
1
8
27
@rewonfc
rewon
6 years
One of my favorites from @OpenAI's jukebox: 'Lose Yourself' re-rendered by Kanye https://t.co/PlYKbaEjon
1
6
53