We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.
We are collaborating to figure out the details. Thank you so much for your patience through this.
Unveiling GPT-4 -- our large multimodal model that exhibits human-level performance on various professional and academic benchmarks. With iterative alignment and adversarial testing, it's our best-ever model on factuality, steerability, and safety.
Introducing Sora, our text-to-video model.
Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions.
Prompt: “Beautiful, snowy
Just made ChatGPT available on our API! Our incredible team of builders has delivered a model 10x cheaper than our existing GPT-3.5 models through system-wide optimizations, making it easier to power as many applications as possible.
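The launch above refers to the Chat Completions endpoint. A minimal sketch of the request shape (the model name and message roles follow the March 2023 API launch; no network call is made here, and the joke prompt is purely illustrative):

```python
import json

def build_chat_request(user_message, system_prompt="You are a helpful assistant."):
    # Build a Chat Completions request body: a model name plus an
    # ordered list of role-tagged messages.
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request("Tell me a joke about optimizers.")
# In a real client this payload is POSTed to the chat completions
# endpoint with an Authorization: Bearer <API key> header.
print(json.dumps(payload, indent=2))
```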
One year since GPT-4 deployment: From GPT-1 and 2 establishing the language model paradigm, through GPT-3's scaling predictions, to GPT-4 showing how complex systems emerge, mimicking nature’s unpredictable patterns from simple elements. An exploration from observation to deep,
We’re testing ChatGPT Plus, a $20/month subscription plan for faster responses and higher reliability during peak demand times. And the free tier is still here for you.
We’re rolling out web browsing and Plugins to all ChatGPT Plus users over the next week! Moving from alpha to beta, they allow ChatGPT to access the internet and to use 70+ third-party plugins.
Governance of an institution is critical for oversight, stability, and continuity. I am happy that the independent review has concluded and we can all move forward united.
It has been disheartening to witness the previous board’s efforts to scapegoat me with anonymous and
ChatGPT can now browse the internet to provide you with current and authoritative information, complete with direct links to sources. It is no longer limited to data before September 2021.
Just introduced initial support for ChatGPT plugins, helping ChatGPT access up-to-date information, run computations, or use third-party services, while prioritizing safety. Rolling out incrementally to a limited user and developer base for
We’re clarifying how ChatGPT’s behavior is shaped, our plans for improving it, addressing biases & allowing user customization. We’re also exploring ways to get more public input on decision-making.
I’ve signed this letter alongside many others to emphasize the profound importance of math education. I think a deep understanding of math will help us build elements that will bring AI usefully into the human world.
@elonmusk and @sama may not agree on much of late, but do agree AI is built on strong math foundations, including algebra and calculus, applauding @UofCalifornia for recent clarifications on math requirements for admission.
Many industry leaders signed:
We've learned a lot from the ChatGPT research preview and have been making important updates based on user feedback. ChatGPT will be coming to our API and Microsoft's Azure OpenAI Service soon.
Sign up for updates here:
A more intuitive interface for ChatGPT. Just chat with it using your voice or show it what you’re talking about using images. Rolling out over the next 2 weeks.
Use your voice to engage in a back-and-forth conversation with ChatGPT. Speak with it on the go, request a bedtime story, or settle a dinner table debate.
Sound on 🔊
Some updates for our devs: new function calling capability in Chat Completions API, new steerable GPT-4 & 3.5 Turbo models, 16k context 3.5 Turbo model and more
I'm incredibly proud of the research the OpenAI team has done to get these new models trained, and the hard engineering to deploy them so that the world can benefit from them!
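Function calling, announced above, lets the model return a structured call instead of prose. A hedged sketch of the client side (the weather function, its schema, and the assistant message are illustrative; in a real flow the `function_call` JSON comes back from the API):

```python
import json

def get_current_weather(location, unit="celsius"):
    # Stub tool; a real implementation would query a weather service.
    return {"location": location, "temperature": 22, "unit": unit}

# Schema advertised to the API via the `functions` parameter.
FUNCTIONS = [{
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}]

# Simulated assistant reply: when the model decides a tool is needed,
# it returns a function_call with JSON-encoded arguments instead of text.
assistant_message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"location": "San Francisco"}',
    },
}

# Client-side dispatch: parse the arguments and invoke the named function.
call = assistant_message["function_call"]
args = json.loads(call["arguments"])
result = {"get_current_weather": get_current_weather}[call["name"]](**args)
print(result)  # {'location': 'San Francisco', 'temperature': 22, 'unit': 'celsius'}
```

The function's return value would then be sent back to the model in a follow-up message so it can compose a natural-language answer.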
Our early findings from an initial evaluation of Voice Engine, a model that generates speech closely resembling the source speaker's voice from text input and a 15-second audio sample.
We’re preparing for the 2024 elections by working to prevent AI abuse, increasing transparency about AI-generated content, and improving access to trustworthy voting information.
Memory is now available to all ChatGPT Plus users. Using Memory is easy: just start a new chat and tell ChatGPT anything you’d like it to remember.
Memory can be turned on or off in settings and is not currently available in Europe or Korea. Team, Enterprise, and GPTs to come.
We just launched a new AI classifier trained to detect AI-generated text. It’s not perfect but it’s a step forward in distinguishing between AI and human-written text. Lots of work to be done and we're looking for input.
We’re developing a new tool to help distinguish between AI-written and human-written text. We’re releasing an initial version to collect feedback and hope to share improved methods in the future.
Super excited about our new research direction for aligning smarter-than-human AI:
We finetune large models to generalize from weak supervision—using small models instead of humans as weak supervisors.
Check out our new paper:
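The weak-to-strong setup above can be illustrated with a toy experiment: a noisy "weak supervisor" labels data, and a "strong student" trained only on those labels recovers better-than-supervisor accuracy. This is a minimal numeric sketch, not the paper's method (the paper uses small language models supervising large ones):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: the true label is the sign of the first coordinate.
X = rng.normal(size=(2000, 8))
y_true = (X[:, 0] > 0).astype(float)

# "Weak supervisor": a labeler that flips 20% of labels at random,
# standing in for a small model with limited accuracy.
flip = rng.random(2000) < 0.2
y_weak = np.where(flip, 1 - y_true, y_true)

# "Strong student": logistic regression trained only on the weak labels.
w = np.zeros(8)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))          # sigmoid predictions
    w -= 0.1 * X.T @ (p - y_weak) / len(X)  # gradient step on logistic loss

pred = (X @ w > 0).astype(float)
weak_acc = (y_weak == y_true).mean()
strong_acc = (pred == y_true).mean()
print(f"weak labels: {weak_acc:.2f}, student: {strong_acc:.2f}")
```

Because the label noise is symmetric, the student can average it out and generalize beyond its supervisor — the phenomenon the research direction studies at scale.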
We’ve seen great results using GPT-4 for content policy development and content moderation, enabling more consistent labeling, a faster feedback loop for policy refinement, and less involvement from human moderators. Built on top of the GPT-4 API:
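The moderation workflow described above pairs a written policy with model labeling. A hedged sketch of the two local pieces — prompt construction and label parsing — with an illustrative two-category policy (a real system would send the prompt to the GPT-4 API; here the reply is simulated):

```python
# Illustrative policy text; real policies are far more detailed.
POLICY = """Label the content with exactly one category:
ALLOW - benign content
FLAG  - content that violates the policy"""

def build_moderation_prompt(policy, content):
    # Combine the policy and the content into a single labeling prompt.
    return f"{policy}\n\nContent:\n{content}\n\nLabel:"

def parse_label(model_reply, valid=("ALLOW", "FLAG")):
    # Extract the first token of the reply and validate it against the
    # allowed categories; return None for anything unparseable.
    token = model_reply.strip().split()[0].upper()
    return token if token in valid else None

prompt = build_moderation_prompt(POLICY, "What a lovely day!")
label = parse_label("ALLOW")  # simulated model reply
print(label)  # ALLOW
```

Disagreements between the model's labels and expert labels can then drive the fast policy-refinement loop the post mentions.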
We've just launched fine-tuning for GPT-3.5 Turbo! Fine-tuning lets you train the model on your company's data and run it at scale. Early tests have shown that fine-tuned GPT-3.5 Turbo can match or exceed GPT-4 on narrow tasks:
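Fine-tuning GPT-3.5 Turbo takes training data as chat-format JSONL, one example per line. A minimal sketch of preparing that file (the pirate example is illustrative):

```python
import json

# Each fine-tuning example is a full chat: system prompt, user turn,
# and the assistant reply the model should learn to produce.
examples = [
    {"messages": [
        {"role": "system", "content": "You answer in pirate speak."},
        {"role": "user", "content": "Where is my order?"},
        {"role": "assistant", "content": "Arr, yer parcel be on the high seas!"},
    ]},
]

# Serialize to JSONL: one JSON object per line, the upload format
# for fine-tuning jobs.
jsonl = "\n".join(json.dumps(e) for e in examples)
print(jsonl)
```

The resulting file is uploaded via the files endpoint and referenced when creating the fine-tuning job.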
Just released ChatGPT Enterprise: enterprise-grade security and privacy, unlimited higher-speed GPT-4 access, longer context windows for processing longer inputs, advanced data analysis capabilities, more customization options
Our first iteration of the Preparedness Framework, systematizing safety thinking and mitigating catastrophic risks from increasingly powerful AI models through rigorous capability evaluations, forecasting, and safety measures.
We are systemizing our safety thinking with our Preparedness Framework, a living document (currently in beta) which details the technical and operational investments we are adopting to guide the safety of our frontier model development.
The @UnlearnAI team has been working on AI advancements to eliminate trial and error in medicine. Looking forward to collaborating with @charleskfisher on increasing the rate of medical breakthroughs with AI.
I'm very excited to welcome @miramurati to the board of directors at @UnlearnAI 🚀
As CTO of OpenAI, Mira has shown that she knows more about building and shipping AI-based products than just about anyone else.
converge 1 was a standout success and is one of our fund's most important initiatives for early AI startups. a small cohort of exceptional founders working closely with our team to build AI native companies.
applications are now open for our second cohort - apply here:
Thanks to @TheDailyShow for having me! It was a treat to talk with @TrevorNoah about DALL-E and @OpenAI’s work on building and deploying safe artificial general intelligence.
We’ll be hosting our first developer conference, OpenAI DevDay, on November 6. Registration to attend in person in San Francisco will open in a few weeks. We’ll also livestream the keynote.
We just announced our Preparedness Team, led by @aleks_madry. We’re dealing with tangible threats and real-world challenges, dissecting everything from cybersecurity vulnerabilities to the intricacies of AGI. Consider joining us!
We are building a new Preparedness team to evaluate, forecast, and protect against the risks of highly-capable AI—from today's models to AGI.
Goal: a quantitative, evidence-based methodology, beyond what is accepted as possible:
Also, OpenAI is truly a place where the nuanced and bold discussion on AI safety is happening right now. If you want to be a part of this conversation, share your feedback (see bottom of the page) or better yet, join our team:
(1/3) Alongside the Superalignment team, my team is working on the practical side of alignment: building systems to enable safe AI deployment. We are looking for strong research engineers and scientists to join the effort.
We just shared the Frontier Model Forum's initiative, the AI Safety Fund. This fund will enable rigorous, independent research, with the goal of thoroughly examining and evaluating the most advanced AI models of our time.
Today, we are announcing Chris Meserole as the Executive Director of the Frontier Model Forum, and the creation of a new AI Safety Fund, a $10 million initiative to promote research in the field of AI safety.
I’ve had the pleasure of working with @OpenAI CTO @miramurati for a few years now, but we’ve rarely had time to catch up about our backgrounds. Take a listen to our conversation on #BehindTheTech where we discuss her innate sense of curiosity, deploying AI tools, and much more!
Little known fact: Many of OpenAI’s key results, including the Dota 2 bot and the pre-training of GPT-4, are thanks to the brilliant Jakub Pachocki @merettm
Introducing the Instruction Hierarchy, our latest safety research to advance robustness for prompt injections and other ways of tricking LLMs into executing unsafe actions. More details:
Good article on the work that @charleskfisher & his team at @UnlearnAI are doing, using deep learning to make clinical trials smarter — creating digital twins to predict how diseases progress, which speeds up the whole process and helps more people
We've trained an unsupervised language model that can generate coherent paragraphs and perform rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training:
Today we introduced the Model Spec, starting a public process on shaping desired model behavior — giving stakeholders more agency over steering AI models as they significantly improve in decision making and instruction following capabilities.
Super exciting new research milestone on alignment:
We trained language models to assist human feedback!
Our models help humans find 50% more flaws in summaries than they would have found unassisted.
We've found we can improve AI language model behavior and reduce harmful content by fine-tuning on a small, carefully designed dataset, and we are already incorporating this in our safety efforts.
Shaping model behavior is a nascent science and this is another step in our systematic approach to model safety. We think it’s crucial for people to understand and participate in the debate of choices involved in shaping model behavior.