Lilian Weng

@lilianweng

Followers
146K
Following
136
Media
13
Statuses
191

Co-founder of Thinking Machines Lab @thinkymachines; Ex-VP, AI Safety & robotics, applied research @OpenAI; Author of Lil'Log

Joined December 2009
@lilianweng
Lilian Weng
11 days
Giving your models more time to think before prediction, like via smart decoding, chain-of-thought reasoning, latent thoughts, etc., turns out to be quite effective for unblocking the next level of intelligence. New post is here :) . “Why we think”:
52
413
3K
@lilianweng
Lilian Weng
7 months
After working at OpenAI for almost 7 years, I decided to leave. I learned so much and now I'm ready for a reset and something new. Here is the note I just shared with the team. 🩵
Tweet media one
270
346
6K
@lilianweng
Lilian Weng
2 years
About 650 / 770 signed at this moment. As people start waking up, more will come. All the efforts started after 1:30 AM, 500+ within two hours and all of this after 2 crazy days with very little sleep.
154
564
5K
@lilianweng
Lilian Weng
2 years
Agent = LLM + memory + planning skills + tool use. This is probably just a start of a new era :).
103
760
4K
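The equation in the tweet can be sketched as a minimal control loop. Everything below is illustrative, not a real API: `llm` is any callable that maps a prompt to an action string, and `tools` is a plain dict of callables.

```python
# Minimal sketch of "Agent = LLM + memory + planning + tool use".
# All names here are hypothetical; a real agent framework adds much more.

def run_agent(llm, tools, task, max_steps=5):
    memory = []  # running transcript the model conditions on
    for _ in range(max_steps):
        # Planning: ask the model for the next action given task + memory.
        action = llm(f"Task: {task}\nHistory: {memory}\nNext action?")
        if action.startswith("FINISH"):
            return action, memory
        # Tool use: dispatch "tool_name:argument" to an external tool.
        name, _, arg = action.partition(":")
        observation = tools.get(name, lambda a: "unknown tool")(arg)
        memory.append((action, observation))
    return "FINISH: step limit reached", memory
```

The point of the sketch: the LLM only proposes text, while memory and tools turn that text into stateful, grounded behavior.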
@lilianweng
Lilian Weng
2 years
Just had a quite emotional, personal conversation w/ ChatGPT in voice mode, talking about stress, work-life balance. Interestingly I felt heard & warm. Never tried therapy before but this is probably it? Try it especially if you usually just use it as a productivity tool.
@sama
Sam Altman
2 years
voice mode and vision for chatgpt! really worth a try.
505
218
3K
@lilianweng
Lilian Weng
6 months
🦃 At the end of Thanksgiving holidays, I finally finished the piece on reward hacking. Not an easy one to write, phew. Reward hacking occurs when an RL agent exploits flaws in the reward function or env to maximize rewards without learning the intended behavior. This is imo a.
67
227
2K
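As a toy illustration of the definition in the tweet (an entirely made-up environment): when the reward function is a flawed proxy for the intended behavior, the reward-maximizing policy can score higher than the honest one while never doing the intended task.

```python
# Toy reward hacking: the intended behavior is to reach the goal tile (5),
# but the flawed reward pays per step spent on a "shiny" tile (2), so a
# reward-maximizing policy just parks there. Environment is invented.

def rollout(policy, steps=10):
    pos, reward, reached_goal = 0, 0.0, False
    for _ in range(steps):
        pos = max(0, min(5, pos + policy(pos)))
        reward += 1.0 if pos == 2 else 0.0   # flawed proxy reward
        reached_goal |= (pos == 5)           # behavior we actually wanted
    return reward, reached_goal

honest = lambda pos: 1                        # walk straight to the goal
hacker = lambda pos: 1 if pos < 2 else 0      # loiter on the shiny tile
```

Here the "hacker" policy earns more reward than the "honest" one yet never reaches the goal, which is exactly the gap between reward and intent.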
@lilianweng
Lilian Weng
2 years
🛠 New post on Prompt Engineering: Steer a large pretrained language model to do what you want wo/ updating the model weights. Note that this just introduces general ideas; for your own problem, you always need tuning and experimentation.
34
360
2K
@lilianweng
Lilian Weng
6 years
😊Self-supervised learning opens up a huge opportunity for better utilizing unlabelled data while learning in a supervised manner. My latest post covers many interesting ideas of self-supervised learning tasks on images, videos & control problems:
11
434
2K
@lilianweng
Lilian Weng
6 years
It has been a long journey for us. There were moments when I felt disappointed or almost hopeless, but the progress we have made, together as a team, is incredible. We made it through and made it happen. Check it out!!.
@OpenAI
OpenAI
6 years
We've trained an AI system to solve the Rubik's Cube with a human-like robot hand. This is an unprecedented level of dexterity for a robot, and is hard even for humans to do. The system trains in an imperfect simulation and quickly adapts to reality:
22
251
2K
@lilianweng
Lilian Weng
5 years
Since the paper "Attention Is All You Need", so many new things have happened to improve the Transformer model, e.g. to make the attention span longer, to reduce memory & compute cost, etc. That's what my post is about - 🤓.
11
365
2K
@lilianweng
Lilian Weng
5 years
Exploration strategies in deep RL are such a critical topic. I almost immediately regretted it when I started writing on this big subject because it has so much more content than I expected. But here it comes, phew:.
25
324
2K
@lilianweng
Lilian Weng
2 years
OpenAI is nothing without its people.
31
60
1K
@lilianweng
Lilian Weng
3 months
This is something we have been cooking together for a few months and I'm very excited to announce it today. Thinking Machines Lab is my next adventure and I'm feeling very proud and lucky to start it with a group of talented colleagues. Learn more about our vision at.
@thinkymachines
Thinking Machines
3 months
Today, we are excited to announce Thinking Machines Lab (, an artificial intelligence research and product company. We are scientists, engineers, and builders behind some of the most widely used AI products and libraries, including ChatGPT,.
85
63
1K
@lilianweng
Lilian Weng
4 years
Diffusion models are another type of generative model, besides GAN, VAE, and flow models. The idea is quite smart and clean. It is flexible enough to model any complex distribution while remaining tractable to evaluate the distribution.
9
271
1K
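The forward (noising) half of a diffusion model has a clean closed form worth seeing once. A minimal numpy sketch, assuming a DDPM-style linear variance schedule (the schedule values are illustrative):

```python
import numpy as np

# Forward diffusion in closed form:
#   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise,
# where alpha_bar_t is the cumulative product of (1 - beta_t).

def forward_diffuse(x0, t, betas, rng):
    alpha_bar = np.cumprod(1.0 - betas)[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

betas = np.linspace(1e-4, 0.02, 1000)  # linear schedule, as in DDPM
```

At small `t` the sample stays close to `x0`; by the final step it is nearly pure Gaussian noise, which is what the learned reverse process starts from.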
@lilianweng
Lilian Weng
4 years
Contrastive learning aims to learn representation such that similar samples stay close, while dissimilar ones are far apart. It can be applied to supervised / unsupervised data and has been shown to achieve good results on various tasks. 📚 A long read:
13
274
1K
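The "similar samples close, dissimilar far apart" objective in the tweet is commonly realized as an InfoNCE-style loss. A toy numpy sketch, not tied to any specific paper's recipe:

```python
import numpy as np

# InfoNCE-style contrastive loss: each anchor should be most similar to its
# matched positive; the other positives in the batch act as negatives.

def info_nce(anchors, positives, temperature=0.1):
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # matched pairs on diagonal
```

Correctly matched pairs give a much lower loss than mismatched ones, which is the signal that shapes the representation.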
@lilianweng
Lilian Weng
2 years
🦖Large Transformers are powerful but expensive to train & use. The extremely high inference cost is a big bottleneck for adopting them for solving real-world tasks at scale. Check out my new post on some ideas on inference optimization for Transformers:
14
261
1K
@lilianweng
Lilian Weng
6 years
If you, like me, often ponder the question of why DNNs with so many parameters can generalize without severe overfitting, check this out:
16
372
1K
@lilianweng
Lilian Weng
2 years
It feels a bit intimidating to write about, but work on attacks can lead to good insights for mitigation. Plan to write about mitigation work separately later. Also want to thank all the researchers who shared disclosure reports w/ us so far. 🙏🙏🙏.
23
206
1K
@lilianweng
Lilian Weng
11 months
Wrote about extrinsic hallucinations during the July 4th break. Here is what ChatGPT suggested as a fun tweet for the blog:. 🚀 Dive into the wild world of AI hallucinations! .🤖 Discover how LLMs can conjure up some seriously creative (and sometimes.
31
199
1K
@lilianweng
Lilian Weng
4 years
I've read so many papers with small incremental changes that can be well summarized in one sentence. I wish there were a better way to share incremental improvements & corresponding experimental results. They are interesting and valuable but a full 10-page paper seems too long.
49
72
1K
@lilianweng
Lilian Weng
2 years
❤️.
@sama
Sam Altman
2 years
i love the openai team so much.
20
22
1K
@lilianweng
Lilian Weng
2 years
Coding calms me down when I’m depressed or anxious, so coding with copilot is like therapy.
46
63
1K
@lilianweng
Lilian Weng
1 year
🎨Spent some time refactoring the 2021 post on diffusion models with new content: ⬇️.⬇️.⬇️.🎬Then another short piece on diffusion video models: (Yes, I had an intensive weekend🥹).
30
171
1K
@lilianweng
Lilian Weng
3 years
🧮 I finally spent some time learning what exactly Neural Tangent Kernel (NTK) is and went through some mathematical proof. Hopefully after reading this, you will not feel all the math behind NTK is that scary, but rather, quite intuitive.
13
182
1K
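For intuition to go with the tweet: the empirical NTK of a finite network is just the Gram matrix of parameter gradients, K(x, x') = ⟨∇_θ f(x), ∇_θ f(x')⟩. A sketch for a tiny one-hidden-layer net f(x) = v · tanh(Wx); this is the empirical kernel, not the infinite-width limit.

```python
import numpy as np

def empirical_ntk(xs, W, v):
    """K[i, j] = <grad_theta f(x_i), grad_theta f(x_j)> for f(x) = v . tanh(Wx)."""
    rows = []
    for x in xs:
        h = np.tanh(W @ x)
        grad_v = h                                # d f / d v
        grad_W = np.outer(v * (1.0 - h ** 2), x)  # d f / d W, by the chain rule
        rows.append(np.concatenate([grad_v, grad_W.ravel()]))
    J = np.stack(rows)   # Jacobian of outputs w.r.t. all parameters
    return J @ J.T       # Gram matrix = empirical NTK
```

By construction the kernel is symmetric and positive semi-definite, which is the property the NTK theory builds on.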
@lilianweng
Lilian Weng
3 years
Updated this one-year-old post on diffusion models with some new content based on recent progress - including classifier-free guidance, GLIDE, unCLIP, Imagen and latent diffusion models.
@lilianweng
Lilian Weng
4 years
Diffusion models are another type of generative model, besides GAN, VAE, and flow models. The idea is quite smart and clean. It is flexible enough to model any complex distribution while remaining tractable to evaluate the distribution.
14
128
966
@lilianweng
Lilian Weng
4 years
Training large models demands a lot of GPU memory and a long training time. With several training parallelism strategies and a variety of memory saving designs, it is possible to train very large neural networks across many GPUs.
5
167
888
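The simplest of the parallelism strategies the tweet mentions is data parallelism: each worker computes gradients on its own shard, then gradients are averaged (the all-reduce step) before the shared update. A pure-numpy sketch; real systems do this with NCCL / `torch.distributed` and much more machinery.

```python
import numpy as np

# Data parallelism in one step: per-shard gradients, then an averaged update.
# With equal-sized shards, this is mathematically identical to one big batch.

def parallel_sgd_step(w, shards, grad_fn, lr=0.1):
    grads = [grad_fn(w, x, y) for x, y in shards]  # one grad per "worker"
    g = np.mean(grads, axis=0)                     # all-reduce: average
    return w - lr * g
```

The equivalence with full-batch SGD (for equal shard sizes) is why data parallelism scales batch size without changing the optimization problem.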
@lilianweng
Lilian Weng
5 years
As a person who is working in Robotics but has no academic training in Robotics, I found this "Modern Robotics" course super interesting: You also need the book:
8
141
865
@lilianweng
Lilian Weng
8 months
📢 We are hiring Research Scientists and Engineers for safety research at @OpenAI, spanning safe model behavior training, adversarial robustness, AI in healthcare, frontier risk evaluation, and more. Please fill in this form if you are interested:
20
73
768
@lilianweng
Lilian Weng
5 years
Humans learn through curricula from birth. We can learn complicated math problems because we have accumulated enough prior knowledge. This could be true for training a ML/RL model as well. Let's see how curriculum can help an RL agent learn:
5
156
762
@lilianweng
Lilian Weng
4 years
If you are interested in developing innovative solutions for real-world machine learning problems and deploying cutting-edge deep learning techniques via our API product to benefit the public, check this out and come join us!
11
136
734
@lilianweng
Lilian Weng
2 years
🚜 Cannot believe it is almost 3 years since my 2020 post on variations of Transformer. I spent some time and did a big refactoring of that old post with new section structure and new papers. Still missing a few items tho, will add them in slowly:
12
119
754
@lilianweng
Lilian Weng
1 year
🗣️I've been thinking about data quality & the human factor in the process a lot lately, so I wrote a short post on the topic: More: If you are into the topic, my team is hiring Research Engineers for a new sub-team, Human-AI Interaction:
22
101
753
@lilianweng
Lilian Weng
5 years
Although popular and successful model architectures are mostly designed by human experts, it doesn't mean we have settled on the best option. Neural Architecture Search (NAS) automates network architecture engineering in a more systematic way.
10
156
695
@lilianweng
Lilian Weng
2 years
Rebirth.
@OpenAI
OpenAI
2 years
We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo. We are collaborating to figure out the details. Thank you so much for your patience through this.
21
15
604
@lilianweng
Lilian Weng
6 years
Meta RL is a great idea 💡: After being trained over a distribution of tasks, an agent is able to solve a new task by developing a new RL algorithm with its internal dynamics. Check my latest blog post if interested:
2
119
603
@lilianweng
Lilian Weng
6 years
Gradient descent is not the only option when optimizing model parameters. Evolution strategies can help too. Check out my new post if you are interested in how CMA-ES works or the way ES is used in deep RL:
9
136
592
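The core ES trick the tweet points at can be shown in a few lines: estimate a search gradient from reward-weighted Gaussian perturbations (in the spirit of natural ES; CMA-ES adds covariance adaptation on top). The objective and hyperparameters below are purely illustrative.

```python
import numpy as np

# One evolution-strategies update: perturb parameters, score each perturbation,
# and move in the reward-weighted direction. No backprop needed.

def es_step(theta, objective, rng, pop=50, sigma=0.1, lr=0.05):
    eps = rng.standard_normal((pop, theta.size))
    rewards = np.array([objective(theta + sigma * e) for e in eps])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # standardize
    grad_est = (eps.T @ rewards) / (pop * sigma)  # search-gradient estimate
    return theta + lr * grad_est

objective = lambda x: -np.sum((x - 3.0) ** 2)  # toy objective, maximum at x = 3
```

Because only objective values are used, the same loop works for non-differentiable rewards, which is why ES shows up in deep RL.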
@lilianweng
Lilian Weng
2 months
👩‍🍳Actively cooking the next blog post. Tiny teaser: It is spiritually related to our new company.
16
11
599
@lilianweng
Lilian Weng
4 years
The first post to start 2021💡: How to steer a powerful unconditioned language model to output what we want? It is still a challenging open research question. There are some ways although still only in limited domains.
9
95
559
@lilianweng
Lilian Weng
6 years
My new post on language models examined how word embedding evolved from context-agnostic to context-dependent, as well as the new trend in large unsupervised pre-trained language models which have achieved amazing SOTA results on various end tasks.
5
151
561
@lilianweng
Lilian Weng
9 months
🍓 Finally o1 is out - our first model with general reasoning capabilities. Not only does it achieve impressive results on hard, scientific tasks, but it is also significantly improved on safety and robustness. We found reasoning in context about safety
Tweet media one
Tweet media two
20
56
563
@lilianweng
Lilian Weng
16 days
When a new dataset comes out, I get excited and check it out and then only realize that this is another meta-mixed dataset combining a collection of other existing datasets. My brain immediately acts like "oh fork . contamination!" No meta-meta-mixed dataset plzzzz :lolsob:.
26
26
580
@lilianweng
Lilian Weng
2 years
seriously considering writing about corporate governance structure in my next blog post.
@LiamFedus
William Fedus
2 years
Corporate Governance course enrollment up +7000%.
30
11
540
@lilianweng
Lilian Weng
2 years
🤷‍♀️.
@jasonkwon
Jason Kwon
2 years
Tweet media one
15
13
530
@lilianweng
Lilian Weng
7 years
If you are interested in learning RL, especially Deep RL, but not sure where to start, check out this post I wrote earlier this year:
7
176
536
@lilianweng
Lilian Weng
1 year
I’ve started using a similar feature during my Japan trip, like translating my conversation with a sushi chef or learning about different types of rocks in a souvenir store. The utility is on another level. Proud to be part of it. ❤️. Tip: You need to interrupt the ChatGPT voice.
@OpenAI
OpenAI
1 year
Say hello to GPT-4o, our new flagship model which can reason across audio, vision, and text in real time: Text and image input rolling out today in API and ChatGPT with voice and video in the coming weeks.
28
38
517
@lilianweng
Lilian Weng
7 years
Here is my new blog post on flow-based deep generative models. Different from GAN or VAE, these models explicitly learn the probability density function of the real data using normalizing flows.
5
134
477
@lilianweng
Lilian Weng
3 years
My new post looks into various methods on how to extend a pre-trained foundation language model to be capable of consuming visual signals; in other words, transform a pretrained LM into a VLM to resolve vision language tasks.
8
84
463
@lilianweng
Lilian Weng
3 years
Part 2 of “what if you don’t have enough training data” series on active learning. When the labeling budget is limited or labeling cost is very high, active learning comes in handy to select the most valuable samples to label next.
5
78
454
@lilianweng
Lilian Weng
3 years
The performance of supervised learning tasks improves with more high-quality labels. However it is expensive to collect many such labels. Semi-supervised learning is one of the paradigms for dealing with label scarcity:
7
94
443
@lilianweng
Lilian Weng
3 years
Part 3 of “what if you don’t have enough training data” series - touching on creating more synthetic data via data augmentation or model generation, as well as some ideas on how to work with noisy labels (given synthetic data might not be fully correct).
7
63
425
@lilianweng
Lilian Weng
2 years
(1/3) Alongside the Superalignment team, my team is working on the practical side of alignment: Building systems to enable safe AI deployment. We are looking for strong research engineers and scientists to join the efforts.
@OpenAI
OpenAI
2 years
We need new technical breakthroughs to steer and control AI systems much smarter than us. Our new Superalignment team aims to solve this problem within 4 years, and we’re dedicating 20% of the compute we've secured to date towards this problem. Join us!
22
53
411
@lilianweng
Lilian Weng
1 year
Very interesting read. If we apply a similar idea to build in a safe-mode trigger, it can probably stay robust even after custom fine-tuning.
@AnthropicAI
Anthropic
1 year
New Anthropic Paper: Sleeper Agents. We trained LLMs to act secretly malicious. We found that, despite our best efforts at alignment training, deception still slipped through.
Tweet media one
15
37
392
@lilianweng
Lilian Weng
3 years
You can fine-tune a GPT model with your own dataset on our API now. It opens up all kinds of new possibilities ;).
@OpenAI
OpenAI
3 years
Developers can now create a custom version of GPT-3 for their applications with a single command. Fine-tuning GPT-3 on your data improves performance for many use cases. See results👇
8
41
377
@lilianweng
Lilian Weng
4 years
Our paper on training a single goal-conditioned policy 100% with asymmetric self-play to generalize to many unseen objects and tasks: and more cool videos are available at (The attached video is zero-shot)
3
57
348
@lilianweng
Lilian Weng
1 year
We have various teams working on AI safety at OpenAI. Let us know if you are interested!.
@aleks_madry
Aleksander Madry
1 year
We're building several efforts at OpenAI: Preparedness, reliable AI deployment research, and AI security research. Up for chatting with us about these at NeurIPS? . Fill out this form (by Dec 1):
35
22
330
@lilianweng
Lilian Weng
8 months
🩵🩵🩵.
@miramurati
Mira Murati
8 months
I shared the following note with the OpenAI team today.
Tweet media one
5
9
337
@lilianweng
Lilian Weng
3 years
In case you don't know this already, the OpenAI API is open for immediate access now! No waitlist anymore! 🥳 🥳 🥳.
4
54
331
@lilianweng
Lilian Weng
4 years
Toxicity prevents us from safely deploying powerful pretrained language models for real-world applications. Check out for some work on toxic content detection and model detoxification.
5
65
332
@lilianweng
Lilian Weng
2 years
💛.
@ilyasut
Ilya Sutskever
2 years
I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
12
22
319
@lilianweng
Lilian Weng
2 years
GPT-4 is here! Our best model so far with strong steerability and safety improvement. API waitlist available. A true team effort - with extraordinary talents, strong belief & steady execution, we make things happen. So proud to be part of it. Have fun!.
10
31
312
@lilianweng
Lilian Weng
2 years
How people interact with AI models differs. Statements are just my personal take. 🙏.
51
5
305
@lilianweng
Lilian Weng
10 months
Rule-based rewards (RBRs) use a model to provide RL signals based on a set of safety rubrics, making it easier to adapt to changing safety policies wo/ heavy dependency on human data. It also enables us to look at safety and capability in a more unified lens as a more capable.
@OpenAI
OpenAI
10 months
We’ve developed Rule-Based Rewards (RBRs) to align AI behavior safely without needing extensive human data collection, making our systems safer and more reliable for everyday use.
13
45
310
@lilianweng
Lilian Weng
1 year
More work coming up, & we are hiring:
@OpenAI
OpenAI
1 year
Introducing the Instruction Hierarchy, our latest safety research to advance robustness for prompt injections and other ways of tricking LLMs into executing unsafe actions. More details:
11
16
272
@lilianweng
Lilian Weng
6 years
One of the hardest problems in robotics is that models trained in simulator normally do not work on real robots. Domain randomization is a simple but powerful idea to close this sim2real gap:
5
61
268
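The idea in the tweet can be sketched in a few lines: each training episode runs in a freshly randomized simulator configuration, so the policy cannot overfit any single sim and the randomization range hopefully covers reality. The parameter names and ranges below are invented for illustration.

```python
import random

# Domain randomization sketch: sample a new simulator configuration per
# episode. All parameters and ranges here are hypothetical.

def sample_sim_params(rng):
    return {
        "friction": rng.uniform(0.5, 1.5),        # surface friction scale
        "mass_scale": rng.uniform(0.8, 1.2),      # object mass multiplier
        "motor_latency_ms": rng.uniform(0.0, 40.0),  # actuation delay
    }
```

A training loop would call `sample_sim_params` at every episode reset, so the policy sees a distribution of physics rather than one fixed (and inevitably wrong) simulator.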
@lilianweng
Lilian Weng
4 years
Given that the author has decided to withdraw the paper and the institute has started investigation, I decided to delete my last tweet but still want to emphasize: "We all know how important it is to keep our work original & innovative. NO plagiarism is the minimum standard.".
4
6
257
@lilianweng
Lilian Weng
6 years
Two most interesting papers I’ve found recently: “the lottery ticket hypothesis” (probably already very famous) and “adversarial examples are not bugs but features”
6
45
251
@lilianweng
Lilian Weng
1 year
Finally finished the book “The Righteous Mind” on Xmas day. An old one but a classic. Moral systems feel like magic since they suppress self-interest to make cooperative societies possible.
15
7
251
@lilianweng
Lilian Weng
4 years
We are glad to share a rich collection of Mujoco simulation environments for training robotic tasks, including the hand for solving Rubik's cube and a variety of object manipulation tasks on the table surface with one UR arm and gripper. Have fun!.
1
31
247
@lilianweng
Lilian Weng
6 years
Einstein Field Equations - for beginners! —- such a brilliant video, cannot stop watching.
2
37
244
@lilianweng
Lilian Weng
3 years
Check out this post with @gdb, if you are curious about how to train large deep learning models and you may find it is easier than you expected :) Also we are hiring!.
@OpenAI
OpenAI
3 years
Techniques for training large neural networks, by @lilianweng and @gdb:
7
29
246
@lilianweng
Lilian Weng
5 years
A system capable of answering any factual questions can enable many useful applications. My new post delves into how we can build an open-domain question answering model, with neural networks or with access to a powerful pre-trained language model.
4
41
242
@lilianweng
Lilian Weng
3 years
Procrastinating on the next blog post. ended up hacking together an emoji search tool, totally not optimized for perf/latency though: As an emoji lover, I hope you find it as fun as I do :P
Tweet media one
15
25
237
@lilianweng
Lilian Weng
5 years
In special times like now, I find it easy to overwork and not get motivated to take PTO. But I finally did it, no travel, no plan and it was simply amazing. Finished a big painting (not original) and some casual readings. To keep your mind clear, you do need proper rest.
Tweet media one
5
9
231
@lilianweng
Lilian Weng
3 years
Together with @_jongwook_kim we will present a tutorial on self-supervised learning. See you soon 🙌.
@alfcnz
Alfredo Canziani
3 years
Join @lilianweng and @_jongwook_kim tonight (Mon 6 Dec ’21) at NeurIPS' «Self-Supervised Learning: Self-Prediction and Contrastive Learning» at 20:00 EST. @ermgrant and I will be entertaining you as session chairs.
Tweet media one
3
20
233
@lilianweng
Lilian Weng
3 years
So proud of the team! This new series of embedding models have amazing performance on clustering and search tasks. And they are accessible via OpenAI API.
@OpenAI
OpenAI
3 years
We're introducing embeddings, a new feature of our API that distills relationships between concepts, sentences, and even code in a simple numerical representation — for more powerful search, classification, and recommendations.
5
16
227
@lilianweng
Lilian Weng
1 month
See you at #ICLR2025 soon. Excited about chatting with many of you about Thinking Machines and what we have been up to!.
@thinkymachines
Thinking Machines
1 month
Thinking Machines is hosting a happy hour in Singapore during #ICLR2025 on Friday, April 25: Come eat, drink, and learn more about us!.
7
7
227
@lilianweng
Lilian Weng
29 days
Nope what’s that?.
@isafulf
Isa Fulford
29 days
me at the ICLR OpenAI recruiting event: . random man: have you heard of arXiv?.
15
5
229
@lilianweng
Lilian Weng
2 years
40 cents per million tokens + cutting edge performance. Why not give our embeddings a try? 😌🐳.
@OpenAI
OpenAI
2 years
Our new embedding model is significantly more capable at language processing and code tasks, cost effective, and simpler to use.
12
20
216
@lilianweng
Lilian Weng
6 years
This is Part 4 of my "Object Detection for Dummies" series, focusing on fast detection models. Well, I explicitly dropped "for dummies" in the name. For whoever reads up to part 4 - "for dummies" is not proper anymore.
2
46
205
@lilianweng
Lilian Weng
3 years
Looking for suggestions on my next post topic. A bit stuck right now 🤖🕵️🧚‍♀️👂.
86
11
193
@lilianweng
Lilian Weng
5 years
❤️
Tweet media one
3
2
179
@lilianweng
Lilian Weng
3 years
Now you can do more!.
@OpenAI
OpenAI
3 years
GPT-3 can now make changes to existing content, not just predict what comes next. Released in the API today:
4
10
172
@lilianweng
Lilian Weng
6 years
“. suffering is a perfectly natural part of getting a neural network to work . ” - only half way through but I’ve already laughed many times, so true to life. Thanks for writing down all this awesome and practical advice.
@karpathy
Andrej Karpathy
6 years
New blog post: "A Recipe for Training Neural Networks" a collection of attempted advice for training neural nets with a focus on how to structure that process over time.
1
14
157
@lilianweng
Lilian Weng
2 years
Preparedness team, led by @aleks_madry, will focus on evaluation of and protection for catastrophic risks that might be triggered by AGI-level capability, including cybersecurity, bioweapon threats, persuasion and more. Come join us 💪 -
@OpenAI
OpenAI
2 years
We are building a new Preparedness team to evaluate, forecast, and protect against the risks of highly-capable AI—from today's models to AGI. Goal: a quantitative, evidence-based methodology, beyond what is accepted as possible:
11
25
159
@lilianweng
Lilian Weng
3 years
I need to hang this art on my wall. Who can resist those sad puppy eyes with sparks of curiosity 😳.
@AndrewMayne
Andrew Mayne
3 years
"a raccoon astronaut with the cosmos reflecting on the glass of his helmet dreaming of the stars". @OpenAI DALL-E 2
Tweet media one
1
11
154
@lilianweng
Lilian Weng
3 months
If you’re interested in working with us and helping build artificial intelligence that can think, beep and boop, consider applying here: 🩵🩵🩵.
5
1
138
@lilianweng
Lilian Weng
4 years
Try it out, my friends! :D.
@OpenAI
OpenAI
4 years
Welcome, @github Copilot — the first app powered by OpenAI Codex, a new AI system that translates natural language into code. Codex will be coming to the API later this summer.
3
14
147
@lilianweng
Lilian Weng
4 years
The future is now.
@OpenAI
OpenAI
4 years
Today's live demo of Codex, our AI that translates natural language to code:
2
14
138
@lilianweng
Lilian Weng
6 years
An interesting read on self-driving cars - the complexity and engineering efforts involved are amazing.
1
24
147
@lilianweng
Lilian Weng
2 years
🧡.
@ilyasut
Ilya Sutskever
2 years
There exists no sentence in any language that conveys how happy I am:.
7
7
134
@lilianweng
Lilian Weng
4 years
Just finished the first episode. Very high quality interview and strongly recommend it :) Now stepping into the second episode woohoo.
@pabbeel
Pieter Abbeel
4 years
Second episode of The Robot Brains podcast is live now! I was lucky enough to sit down with Princeton Professor @orussakovsky and dive into many of the possible issues with the data powering AI systems and what led her to start @ai4allorg!.
3
11
135
@lilianweng
Lilian Weng
1 year
We are lucky to have you Mira and we are with you 💙.
@miramurati
Mira Murati
1 year
Governance of an institution is critical for oversight, stability, and continuity. I am happy that the independent review has concluded and we can all move forward united. It has been disheartening to witness the previous board’s efforts to scapegoat me with anonymous and.
7
4
129
@lilianweng
Lilian Weng
10 months
Iterative deployment for maximizing AI safety learning needs to be built on top of rigorous science and process. We are learning and improving through each launch.
@OpenAI
OpenAI
10 months
We’re sharing the GPT-4o System Card, an end-to-end safety assessment that outlines what we’ve done to track and address safety challenges, including frontier model risks in accordance with our Preparedness Framework.
12
11
133
@lilianweng
Lilian Weng
4 years
Kinda related, it concerns me when I see a junior researcher (meaning not advising students etc.) with 15 publications on their resume within two years.
2
5
111
@lilianweng
Lilian Weng
7 months
@chrisalbon believe it or not. working on the next piece. time is precious 😿.
3
1
113
@lilianweng
Lilian Weng
2 years
Another topic is on effective negotiation.
8
2
107
@lilianweng
Lilian Weng
6 years
Apparently our AI has a very strong opinion 😂.
@gdb
Greg Brockman
6 years
An OpenAI employee printed out this AI-written sample and posted it by the recycling bin:
Tweet media one
3
10
101
@lilianweng
Lilian Weng
5 years
I couldn’t decide the topic of my next post. Would like to hear your ideas. Plz reply! Thanks <3.
40
5
100
@lilianweng
Lilian Weng
2 years
👩🏼‍🔬 was broken late last year. It has been fixed now and should run faster than before. Also, the backend has been updated to use our latest and cheaper `text-embedding-ada-002` model.
6
12
100
@lilianweng
Lilian Weng
9 months
We are also extremely proud of how rigorous and thorough our pre-release testing process is, including a full stack of frontier risk evaluations according to our Preparedness Framework and external red teaming. Read more safety work and evaluations in:
5
11
98