Hongyu Ren Profile
Hongyu Ren

@ren_hongyu

Followers
6,315
Following
638
Media
20
Statuses
169

Research Scientist @openai . CS PhD @stanford . Previously @apple , @googleai and @nvidiaai . I train language models.

Stanford, CA
Joined April 2018
Pinned Tweet
@ren_hongyu
Hongyu Ren
12 days
Thrilled to release o1-mini, a model near and dear to my heart 💙. o1-mini is an efficient model in the o1 series that’s super performant in STEM reasoning, especially math and coding. I can’t wait to see what you all build with o1-mini!!
17
26
303
@ren_hongyu
Hongyu Ren
2 years
Excited to be speaking at the Stanford Graph Learning Workshop! Free registration: More info:
@jure
Jure Leskovec
2 years
Excited to announce 2nd Stanford Graph Learning Workshop on Wed Sept 28th with leaders from academia and industry to showcase recent advances of Graph Representation Learning across a wide range of applications. Program & free registration:
Tweet media one
4
159
752
0
34
294
@ren_hongyu
Hongyu Ren
6 months
Made a small, humble contribution 🀄️
@OpenAI
OpenAI
6 months
Our new GPT-4 Turbo is now available to paid ChatGPT users. We’ve improved capabilities in writing, math, logical reasoning, and coding. Source:
Tweet media one
657
1K
7K
12
3
177
@ren_hongyu
Hongyu Ren
10 months
💕
@sama
Sam Altman
10 months
i love the openai team so much
5K
4K
72K
3
7
156
@ren_hongyu
Hongyu Ren
2 months
I have been working on 4o mini 🚀, our most intelligent and efficient model. Hope it can unlock a lot more downstream applications, check it out!!
@OpenAIDevs
OpenAI Developers
2 months
Introducing GPT-4o mini! It’s our most intelligent and affordable small model, available today in the API. GPT-4o mini is significantly smarter and cheaper than GPT-3.5 Turbo.
Tweet media one
164
616
3K
15
5
134
@ren_hongyu
Hongyu Ren
5 years
Excited to share our #ICLR2020 paper and code for Query2box, a multi-hop reasoning framework on knowledge graphs. We design box embeddings to answer complex logical queries with conjunction and disjunction. Joint work with @weihua916 and @jure .
Tweet media one
2
22
116
@ren_hongyu
Hongyu Ren
3 years
🚨Glad to share that Combiner is accepted to #NeurIPS2021 as a spotlight! Combiner achieves full attention with subquadratic complexity. Joint work with amazing folks @hanjundai @ZihangDai @mengjiao_yang @jure Dale Schuurmans @daibond_alpha @GoogleAI @StanfordAILab img from wiki
Tweet media one
1
18
75
@ren_hongyu
Hongyu Ren
4 years
New #NeurIPS2020 paper on multi-hop reasoning on knowledge graphs! joint work with @jure @StanfordAILab Paper: Website: Code (BetaE, Query2box, GQE):
Tweet media one
1
10
55
@ren_hongyu
Hongyu Ren
2 years
Come join us for the tutorial @ Learning on Graphs: Complex Reasoning over Relational Database, we will present SMORE + Scallop! @StanfordAILab @CIS_Penn @GoogleAI @LogConference Time: 9-10:30am PT, Dec. 10 Webpage: (Attendees will get a virtual cookie👇)
Tweet media one
0
14
47
@ren_hongyu
Hongyu Ren
10 months
I will be flying to #NeurIPS2023 tomorrow ✈️ and stay until Saturday. DM me if you want to chat about #LLM , reasoning and beyond!
1
0
40
@ren_hongyu
Hongyu Ren
6 days
o1-mini and o1-preview are #1 on math!
@lmsysorg
lmsys.org
6 days
No more waiting. o1 is officially on Chatbot Arena! We tested o1-preview and o1-mini with 6K+ community votes. 🥇o1-preview: #1 across the board, especially in Math, Hard Prompts, and Coding. A huge leap in technical performance! 🥈o1-mini: #1 in technical areas, #2 overall.
Tweet media one
52
277
2K
1
0
40
@ren_hongyu
Hongyu Ren
10 months
Simply would like to mark this historic moment. The past five days have been a roller coaster. Happy early thanksgiving! 🦃❤️☺️
@OpenAI
OpenAI
10 months
We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo. We are collaborating to figure out the details. Thank you so much for your patience through this.
6K
13K
66K
0
0
33
@ren_hongyu
Hongyu Ren
11 days
Check out the clip for a preview of behind-the-scenes stories of 🍓
@OpenAI
OpenAI
11 days
Some of our researchers behind OpenAI o1 🍓
221
834
7K
2
0
32
@ren_hongyu
Hongyu Ren
10 months
Ordered some New Orleans chicken wings before heading to New Orleans
Tweet media one
@ren_hongyu
Hongyu Ren
10 months
I will be flying to #NeurIPS2023 tomorrow ✈️ and stay until Saturday. DM me if you want to chat about #LLM , reasoning and beyond!
1
0
40
1
0
30
@ren_hongyu
Hongyu Ren
4 years
Come check our #uai paper on task inference for meta-reinforcement learning. We propose OCEAN, an online task inference framework that models tasks with global and local context variables. Joint work with @AnimaAnandkumar @animesh_garg @yukez @jure . 👉
Tweet media one
1
4
26
@ren_hongyu
Hongyu Ren
11 days
@legit_rumors @OpenAIDevs - OpenAI o1-mini is optimized for STEM applications at all stages of training & data. It has limited world knowledge. Check our research blog post for more details. - We are working on adding more knowledge. Stay tuned for the next version of o1-mini!
0
2
26
@ren_hongyu
Hongyu Ren
2 years
Check out our new blog post for a summary and outlook of graph ML & geometric deep learning in 2022 and 2023. Happy new year! 🧨☃️🎄 👉 link:
@michael_galkin
Michael Galkin
2 years
🎄It's 2023! In a new post, we provide an overview of Graph ML and its subfields (and hypothesize for '23), eg Generative Models, Physics, PDEs, Theory, KGs, Algorithmic Reasoning, and more! With @ren_hongyu @zhu_zhaocheng @chrsmrrs and @jo_brandstetter
1
115
425
0
2
25
@ren_hongyu
Hongyu Ren
2 years
Happy to release DRAGON, a foundation model jointly trained over texts and knowledge bases. Check out GreaseLM too for the FUSION layer at the core!
@michiyasunaga
Michi Yasunaga
2 years
Excited to share our #NeurIPS2022 paper “DRAGON: Deep Bidirectional Language-Knowledge Graph Pretraining”, w/ the amazing @ABosselut @ren_hongyu @xikun_zhang_ @chrmanning @percyliang @jure @StanfordAILab ! Paper: Code: 🧵👇[1/6]
Tweet media one
4
74
289
0
2
23
@ren_hongyu
Hongyu Ren
3 years
Check out our new scalable KG embedding framework! It supports super efficient training of single/multi-hop algorithms on extremely large graphs (~86m nodes), including GQE, Query2box, BetaE, TransE, RotatE, DistMult, ComplEx and so on!🥳
Tweet media one
@jure
Jure Leskovec
3 years
Excited to share our collaboration with @GoogleAI : SMORE is a scalable knowledge graph completion and multi-hop reasoning system that scales to hundreds of millions of entities and relations. @ren_hongyu , @hanjundai , et al.
Tweet media one
2
76
403
0
5
23
@ren_hongyu
Hongyu Ren
3 years
👉Check out the Stanford Graph Learning Workshop on Sep. 16! Super excited to present recent advances on Knowledge Graphs. Free registration at: . It will be live streamed!
@jure
Jure Leskovec
3 years
Stanford is proud to bring together leaders from academia&industry to showcase advances in Graph Neural Networks. Program includes applications, frameworks and industry panels on challenges of graph-based machine learning models. Register at:
Tweet media one
11
221
897
0
3
23
@ren_hongyu
Hongyu Ren
4 years
New #NeurIPS2020 paper on Graph Neural Nets #GNN , Representation Learning, Robustness! "Graph Information Bottleneck" w/ @tailintalent , Pan Li, @jure @StanfordAILab @PurdueCS Website: Paper: Code: (1/n)
Tweet media one
1
7
20
@ren_hongyu
Hongyu Ren
11 days
@dvyio @OpenAIDevs OpenAI o1-mini is optimized for STEM applications at all stages of training & data. It has limited world knowledge. Check our research blog post for more details.
0
1
19
@ren_hongyu
Hongyu Ren
3 years
👉Check out our new work QA-GNN, a new commonsense reasoning model using language models, GNNs, and knowledge graphs. #NAACL2021
@michiyasunaga
Michi Yasunaga
3 years
Excited to share our #NAACL2021 paper "QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering"! Joint work with @ren_hongyu @ABosselut @percyliang @jure @StanfordAILab Paper: Github: Thread below [1/6]
Tweet media one
1
48
187
0
6
19
@ren_hongyu
Hongyu Ren
1 year
🤓The project took us months of conceptualization and refinement, and we are glad to finally release it – look forward to hearing from both Graph ML and Database folks! Check out the blogpost from @michael_galkin for more details :) 10/10, n=10
1
4
17
@ren_hongyu
Hongyu Ren
2 years
Check out Connection Subgraph Reasoner accepted at #NeurIPS2022 , a novel pretraining subgraph-level objective for few-shot KG link prediction. Paper: Code:
@qhwang3
Qian Huang
2 years
Excited to share our #NeurIPS2022 paper: Few-shot Relational Reasoning via Connection Subgraph Pretraining! . We propose to pretrain on subgraph matching for few-shot relational reasoning tasks. 🧵 Joint work with @ren_hongyu and @jure ! @StanfordAILab
Tweet media one
4
32
218
0
7
17
@ren_hongyu
Hongyu Ren
11 days
@InverseAGI @OpenAIDevs OpenAI o1-mini is optimized for STEM applications at all stages of training & data. It has limited world knowledge. Check our research blog post for more details.
0
3
16
@ren_hongyu
Hongyu Ren
2 years
How do we answer complex queries in an inductive setting? Check out this new #NeurIPS2022 paper with a novel method and a set of benchmarks for inductive multi-hop reasoning. This guy makes cool gifs!!
@michael_galkin
Michael Galkin
2 years
Our new #NeurIPS2022 work on inductive query answering - new nodes at inference, complex queries over incomplete graphs, no node features, seems impossible? 🤔 With @zhu_zhaocheng @ren_hongyu @tangjianpku Paper: Code: 🧵1/10
2
9
58
0
4
16
@ren_hongyu
Hongyu Ren
7 months
I simply cannot believe the samples generated by #Sora , they are really not cherry picked 🤯
2
0
16
@ren_hongyu
Hongyu Ren
4 years
Come check our BetaE paper today starting 9pm PT at Town E1 - Spot B1! @jure
@ren_hongyu
Hongyu Ren
4 years
New #NeurIPS2020 paper on multi-hop reasoning on knowledge graphs! joint work with @jure @StanfordAILab Paper: Website: Code (BetaE, Query2box, GQE):
Tweet media one
1
10
55
0
2
15
@ren_hongyu
Hongyu Ren
2 years
📢Excited to give a talk about SMORE, the first scalable framework that supports link prediction and multi-hop reasoning over massive knowledge graphs! Code: Tuesday 10am ET Room 206 #KDD2022
Tweet media one
0
0
13
@ren_hongyu
Hongyu Ren
11 days
Join the AMA session at 10am PT today! I will answer your questions.
@OpenAIDevs
OpenAI Developers
11 days
We’re hosting an AMA for developers from 10–11 AM PT today. Reply to this thread with any questions and the OpenAI o1 team will answer as many as they can.
483
129
1K
0
1
12
@ren_hongyu
Hongyu Ren
11 days
☺️
@_jasonwei
Jason Wei
11 days
o1-mini is the most surprising research result i've seen in the past year obviously i cannot spill the secret, but a small model getting >60% on AIME math competition is so good that it's hard to believe congrats @ren_hongyu @shengjia_zhao for the great work!
35
94
2K
0
0
12
@ren_hongyu
Hongyu Ren
1 year
Explain superconductors! 🤓
Tweet media one
@OpenAI
OpenAI
1 year
We’re rolling out a bunch of small updates to improve the ChatGPT experience. Shipping over the next week: 1. Prompt examples: A blank page can be intimidating. At the beginning of a new chat, you’ll now see examples to help you get started. 2. Suggested replies: Go deeper with
581
1K
6K
0
0
12
@ren_hongyu
Hongyu Ren
1 year
Why do we need NGDBs and what do graph DBs lack? The biggest motivation is incompleteness - symbolic SPARQL/Cypher-like engines can’t cope with incomplete graphs at scale. Neural graph reasoning, however, is already mature enough to work in large and noisy incomplete graphs. 2/n
Tweet media one
1
2
10
@ren_hongyu
Hongyu Ren
1 year
Broadly, NGDBs are equipped to answer both “what is there?” and “what is missing?” queries whereas standard graph DBs are limited to traversal-only scenarios assuming the graph is complete. 4/n
Tweet media one
1
1
9
@ren_hongyu
Hongyu Ren
1 year
✍️Check out our github where we collect all relevant references, feel free to send PRs - we welcome all contributions :) We’ll be updating other materials on the project website . 9/n
1
1
9
@ren_hongyu
Hongyu Ren
7 months
Absolutely stunning! 💫
@OpenAI
OpenAI
7 months
Introducing Sora, our text-to-video model. Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. Prompt: “Beautiful, snowy
10K
32K
138K
0
0
8
@ren_hongyu
Hongyu Ren
11 days
@felixchin1 @OpenAIDevs The weekly rate limit is 50 for o1-mini. We are working to increase those rates and enable ChatGPT to automatically choose the right model for a given prompt!
1
0
8
@ren_hongyu
Hongyu Ren
6 months
GPT4 is smarter!!
@gdb
Greg Brockman
6 months
Some coding benchmark results for our new GPT-4 Turbo:
69
116
2K
0
0
8
@ren_hongyu
Hongyu Ren
1 year
What are NGDBs? While their architecture might look similar to traditional DBs with Storage and Query Engine, the essential difference is in ditching symbolic edge traversal and answering queries in the latent space (including logical operators). 3/n
Tweet media one
1
1
8
@ren_hongyu
Hongyu Ren
12 days
It was an extremely wild and rewarding journey to train and babysit the model from random init to the final ckpt with the best team possible. Had sooo much fun experiencing all the TOO GOOD TO BE TRUE moments with @shengjia_zhao @_kevinlu @Eric_Wallace_ @LiamFedus @_aidan_clark_ !
0
0
7
@ren_hongyu
Hongyu Ren
1 year
Thanks to GNNs, NGDBs can either answer the entire query graph at once or execute it sequentially. We don’t need any explicit indexes - the latent space formed by a neural encoder *is* the single uniform index. 5/n
Tweet media one
1
2
7
@ren_hongyu
Hongyu Ren
1 year
Finally, we outline challenges and open problems for NGDBs. Lots of cool stuff to work on! (especially if you are in an existential crisis with GPT4; in fact, how to design an LLM interface for NGDBs, and how to let NGDBs help compress and accelerate LLMs, are very promising) #GPT4 8/n
1
1
6
@ren_hongyu
Hongyu Ren
1 year
What’s the difference between NGDBs and vector DBs? Vector DBs are fast and encoder-independent consuming embeddings from all kinds of models. Though they are limited to a few distance functions and lack query answering capabilities, they fit well into the NGDB blueprint! 6/n
Tweet media one
1
1
6
@ren_hongyu
Hongyu Ren
1 year
In the NGDB framework, we create a taxonomy and survey 40+ neural graph reasoning models that can potentially serve as Neural Query Engines under 3 main categories: Graphs (theory and expressiveness), Modeling (graph learning), and Queries (what can we answer). 7/n
Tweet media one
1
1
6
@ren_hongyu
Hongyu Ren
11 days
@actualrealyorth @OpenAIDevs We are looking into this and building better infra to support our api customers and developers.
1
0
5
@ren_hongyu
Hongyu Ren
4 years
Check out OGB for large-scale graph ML datasets!
@weihua916
Weihua Hu
4 years
Super excited to share Open Graph Benchmark (OGB)! OGB provides large-scale, diverse graph datasets to catalyze graph ML research. The datasets are easily accessible via OGB Python package with unified evaluation protocols and public leaderboards. Paper:
Tweet media one
2
112
408
0
0
4
@ren_hongyu
Hongyu Ren
1 year
@jure Congrats Jure!! It is my honor to have you as my advisor 👏
0
0
4
@ren_hongyu
Hongyu Ren
12 days
amazing video!
@hwchung27
Hyung Won Chung
12 days
“how many r’s in strawberry?” I had to ask this to demo our new model o1-preview 😎 LLMs process text at a subword level. A question that requires understanding the notion of both character and word confuses them. OpenAI o1-preview "thinks harder" to avoid mistakes.
Tweet media one
5
7
93
0
0
4
@ren_hongyu
Hongyu Ren
3 years
(2/x) The key idea is to factorize the conditional expectation formulation of attention in a structured way. Each token can attend to all the other tokens either by direct attention, or indirectly via a "proxy" that summarizes a local region. Paper link:
1
2
4
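The factorization described above can be sketched in plain Python (an illustrative toy, not the released Combiner implementation; names and the mean-pooled proxy are assumptions for this sketch): each token attends directly to tokens in its own local block and to a single summary "proxy" per remote block, giving full receptive field at sub-quadratic cost.

```python
import math

def softmax(scores):
    # numerically stable softmax over a list of scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def combiner_attention(x, block=2):
    # x: list of d-dim token vectors. Each token attends directly to its own
    # block and indirectly (via a mean-pooled proxy) to every other block.
    n = len(x)
    blocks = [x[i:i + block] for i in range(0, n, block)]
    proxies = [[sum(coord) / len(b) for coord in zip(*b)] for b in blocks]
    out = []
    for i, q in enumerate(x):
        bi = i // block
        keys = list(blocks[bi]) + [p for j, p in enumerate(proxies) if j != bi]
        w = softmax([dot(q, k) for k in keys])
        out.append([sum(wi * k[c] for wi, k in zip(w, keys))
                    for c in range(len(q))])
    return out
```

With block size ~√n, each token touches O(√n) keys, so total cost is O(n^1.5) rather than O(n²), while every token can still (indirectly) attend to every other token.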
@ren_hongyu
Hongyu Ren
5 years
@weihua916 @jure We show that embedding queries as hyper-rectangles (box) is a natural way to handle relational projection and conjunction. We also analyze the hardness of KG embeddings handling disjunction and provide a clean solution using disjunctive normal form.
1
1
4
@ren_hongyu
Hongyu Ren
4 years
We embed queries as Beta distributions. For conjunctions, we take weighted product of the PDF of the Beta embeddings of input queries. For negation, we calculate the reciprocal of the parameters of the input so that the high density region will be become low and vice versa.
Tweet media one
1
0
3
@ren_hongyu
Hongyu Ren
4 years
@tailin_wu and I will present Graph Information Bottleneck from 9pm to 11pm PT today! #NeurIPS2020 Come chat with us on graphs, representation learning, GNNs. Website: Gathertown: @jure @StanfordAILab @PanLi90769257 @PurdueCS
@ren_hongyu
Hongyu Ren
4 years
New #NeurIPS2020 paper on Graph Neural Nets #GNN , Representation Learning, Robustness! "Graph Information Bottleneck" w/ @tailintalent , Pan Li, @jure @StanfordAILab @PurdueCS Website: Paper: Code: (1/n)
Tweet media one
1
7
20
0
2
3
@ren_hongyu
Hongyu Ren
4 years
Check out query2box, joint work with @weihua916 and @jure . Query2box performs multi-hop logical reasoning on knowledge graphs. #ICLR2020 #iclr Talk: Website: @weihua916 and I will present the work at 10 pm Tue. and 1pm Wed. PT.
0
0
3
@ren_hongyu
Hongyu Ren
3 years
(5/x) It's a direct drop-in replacement of the attention layers and we will release the implementation soon!
0
0
2
@ren_hongyu
Hongyu Ren
3 years
@thodrek @ihabilyas @jure @StanfordAILab Thank you so much Theo, look forward to the collaboration!
0
0
2
@ren_hongyu
Hongyu Ren
4 years
The embedding also captures the uncertainty of a query, measured by the number of answers it has. We found a natural connection between the entropy of the Beta embedding and # answers, without any training needed to enforce this correlation.
0
0
2
@ren_hongyu
Hongyu Ren
3 years
(3/x) We propose six instantiations of Combiner idea with different factorization patterns. And we show we can take a prior sparse Transformer X and make it Combiner-X with full attention capacity while keeping the same asymptotic complexity!
1
0
2
@ren_hongyu
Hongyu Ren
1 year
@weihua916 @jure Congrats Weihua!!
0
0
2
@ren_hongyu
Hongyu Ren
4 years
poster session starting soon at 7pm PT.
0
0
2
@ren_hongyu
Hongyu Ren
4 years
The task is answering first-order logic queries (with existential, conjunction, disjunction and negation) on incomplete KGs. Key insight is to embed the queries and entities, and reason in the embedding space. Previous methods (GQE, Q2B..) cannot handle negation/set complement.
1
0
2
@ren_hongyu
Hongyu Ren
3 years
(4/x) We evaluate Combiner on autoregressive / bidirectional sequence modeling across images and texts. Combiner achieves much better performance across a wide range of tasks and is super scalable.
1
0
2
@ren_hongyu
Hongyu Ren
11 days
@felixchin1 @OpenAIDevs Yes. All prompts are counted the same in ChatGPT.
0
0
2
@ren_hongyu
Hongyu Ren
4 years
The model achieves strong robustness against adversarial attacks on both node features and graph structure! (4/n)
Tweet media one
1
0
1
@ren_hongyu
Hongyu Ren
6 months
@ypatil125 Congrats Yash!!
0
0
1
@ren_hongyu
Hongyu Ren
2 years
@michael_galkin @Mila_Quebec @intel Congrats! Enjoy California 🏖️
0
0
1
@ren_hongyu
Hongyu Ren
5 years
How do generative models generalize and memorize? Check out the blog for a systematic framework!
@StefanoErmon
Stefano Ermon
5 years
If all training images for a GAN/VAE/PixelCNN have 2 objects, will they only generate images with 2 objects? If trained on (🔵,💙,🔴), will they also generate ❤️? Find out in @shengjia_zhao 's blog post on generalization and bias for generative models. 👉
1
134
515
0
0
1