there's (exactly) seven ways to optimize latency in an llm application – just published a guide in the
@openai
docs that covers them!
go check it out! (and let me know what you think)
- link 👇-
I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo.
We are collaborating to figure out the details. Thank you so much for your patience through this.
just published this quickstart –
@openai
Assistants API +
@nextjs
, streaming, function calling, code interpreter, and the new file search!
please steal the code and build something cool ;)
We’ve open sourced a new quickstart to help you build with the Assistants API and
@nextjs
.
It comes with sample code for creating a chat interface with streaming, and using tools like function calling, code interpreter, and the new file search.
Published my first cookbook going over the Assistants API (from features to gotchas), check it out!
Also huge kudos to
@simonpfish
for making this website so god damn clean
A note to
@OpenAI
developers 🫶:
I wanted to express my appreciation for all the warm, thoughtful, and supportive messages I got and I’ve seen posted across the community.
Despite a moment of uncertainty, our commitment to developers remained steadfast.
In the meantime,…
@jerryjliu0
Totally agree on a lot of these points, especially 4️⃣! Fine-tune on a knowledge base, then use the more accurate “hallucinations” for HyDE.
Sort of like how studying a topic will make you better at looking up answers even if you don’t memorize anything.
ft + RAG = 🧠🧠🧠
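The HyDE idea above can be sketched in a few lines: instead of embedding the raw query, have the (fine-tuned) model write a plausible answer first, embed that, and retrieve real documents nearest to it. A toy version, with a bag-of-words stand-in for a real embedding model and made-up documents:

```python
# Toy sketch of HyDE: retrieve against a "hallucinated" answer,
# not the raw query. embed() is a bag-of-words stand-in for a
# real embedding model; docs and texts are illustrative.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "the eiffel tower is 330 meters tall and made of iron",
    "shrimp are cooked quickly in a hot pan with garlic",
]

query = "how tall is the eiffel tower"
# Stand-in for the model's generated hypothetical answer to the query:
hypothetical = "the eiffel tower is about 330 meters tall"

# Retrieve against the hypothetical answer instead of the query.
best = max(docs, key=lambda d: cosine(embed(hypothetical), embed(d)))
```

The point of fine-tuning first is that the hypothetical answer lands closer (in embedding space) to the real documents, even when its details are wrong.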
ShellAI now supports OSS models 🚀
it's my minimal terminal assistant I use literally every day 🔥 - made it super easy to set up any model (hosted & local) with custom prompting
(also added config file auto-backups to make fucking around with it stress-free 🤙🏽) try it out!
In case you ever wanted an AI that is sarcastic while it helps you, I made Jarvis.
Just as helpful as ChatGPT, but consistently gets a good chuckle out of me.
Took a fair bit of prompting... let me know what you think!
that being said, it’s incredibly frustrating that it had to get this far for Ilya to understand the consequences of his moves. they were painfully obvious to everyone, and it kills me to think this entire thing could have been avoided with the most basic realistic forethought
what has become very clear, though, is just how strong the unity of our team is. I am so deeply moved by the camaraderie among this team, and I’m very genuinely inspired by our leadership. I would follow our team anywhere.
alrighty
@willdepue
I guess this answers your question for me: “if you were the software in the robot, would you be able to control it?” … so I guess yeah, I might agree now – if we put agi in a robot I think we solve robotics too
Introducing 𝐌𝐨𝐛𝐢𝐥𝐞 𝐀𝐋𝐎𝐇𝐀🏄 -- Learning!
With 50 demos, our robot can autonomously complete complex mobile manipulation tasks:
- cook and serve shrimp🦐
- call and take elevator🛗
- store a 3 lb pot in a two-door cabinet
Open-sourced!
Co-led by
@tonyzzhao
,
@chelseabfinn
Because the earth’s crust is recycled every few hundred million years, it’s technically possible for a post-industrial civilization to have existed on earth longer than we have, and have left little to no trace for us to find today
We've just launched fine-tuning for GPT-3.5 Turbo! Fine-tuning lets you train the model on your company's data and run it at scale. Early tests have shown that fine-tuned GPT-3.5 Turbo can match or exceed GPT-4 on narrow tasks:
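For reference, fine-tuning data for chat models is uploaded as JSONL, one chat-formatted example per line. A minimal sketch of preparing a training file (the example contents are made up):

```python
import json

# Each training example is one JSON object per line (JSONL), in the
# same chat format the model is served with: a list of messages with
# system / user / assistant roles.
examples = [
    {"messages": [
        {"role": "system", "content": "You answer in our support tone."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Head to Settings > Security and hit Reset."},
    ]},
]

# Write the file that gets uploaded for the fine-tuning job.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```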
Saw some fascinating talks at
@aiDotEngineer
summit!
- 🚶🏽‍♂️ latent space walks and reversing OpenAI embeddings with an encoder/decoder + linear adapter
@thesephist
- 🗺️ ai-powered interfaces that adapt to granularity, like details on a map adapt as we zoom
@Wattenberger
- 🧱…
This is why I love working at OpenAI – start building a good idea and everyone will help you make it a reality 🚀 HUGE shoutout
@slessans
and
@karoliskosas
for taking this project all the way across the finish line!
Good news, the
@OpenAI
fine-tuning UI now supports end-to-end job creation all in the UI, no code required to kick off a job! 🤯
Democratizing access to fine-tuning the world's most advanced models is a huge win.
Congrats to
@slessans
on the ship! 🎉
@nsthorat
@modeless
We used
@phabricator
at Twitter a few years ago (maybe a fork?) and that felt pretty close to critique… though now I see they shut down 😅
curious about Gerrit, but *really* not sure why GitHub can’t make their diffs better
The
@OpenAI
fine-tuning UI is here! 🔥
You can now see your fine-tunes directly and will be able to create them through the UI in the months to come!
We also bumped the concurrent training limit from 1 to 3 so you can fine-tune more models!
At our Seattle event this week (Thursday 4/25):
@ilanbigio
and Anu Trivedi will talk about optimizing AI apps based on their experiences at
@OpenAI
and
@Flipkart
Learn how to harness the power of
#LLMs
for your own projects at LLMs Beyond the Lab:
makes me unreasonably happy that we’re getting LLMs to reach their full potential through… ✨developmental psychology✨ (“violation of expectation” to predict mental state)
up next: therapy for LLMs, and I’m so here for it
@holdenmatt
Curious what your thoughts are on functions for structured output vs prompting for JSON? We’re still exploring with this so feedback is welcome!
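For anyone weighing the two approaches: with prompt-based JSON you ask for JSON and parse defensively, since nothing forces the model to comply; with function calling you describe the output as a JSON Schema and the model fills in matching arguments. A rough sketch, where the function name, fields, and helper are all made up for illustration:

```python
import json

# Prompt-based JSON: ask for JSON in the prompt, then parse
# defensively, because the model may reply with prose anyway.
def parse_json_reply(reply: str):
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        return None  # caller can retry or repair

# Function calling: describe the desired output as a JSON Schema
# instead, and let the model fill in arguments that match it.
weather_fn = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}
```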
still remember my beefed-up 2015 MacBook Pro struggling to run a 100M gpt-2 in 2019 and now we will rip 100x that in a watch…
really cool paper from apple. different kind of pocket god?
Apple announces LLM in a flash: Efficient Large Language Model Inference with Limited Memory
paper page:
Large language models (LLMs) are central to modern natural language processing, delivering exceptional performance in various tasks. However, their…
The fact that we can casually talk about turning a crater into a telescope *on the moon* by shooting 12 grappling hooks from a tiny lander is fucking nuts and I’m excited
@brotzky_
Yeah plug-and-play retrieval is so nice. Polling does suck rn but streaming is coming soon! Not sure there's any way around that last one though?
when a video/post sparks motivation enough to break me out of a doomscroll, I save it
curious if anyone has other content like this?
(sharing some below)
@mattshumer_
Oh interesting… do you have examples of things that were easier with the completion-only playground?
(I was personally so happy moving from completions to chat – really curious what we can do to make this better!)
Excited to share
@GoogleDeepMind
’s newest AI model GraphCast: the most accurate 10-day global weather forecasting system in the world. GraphCast can also offer earlier warnings of extreme weather events, including the path of hurricanes. In
@Science
today
@willdepue
eventually got spooked by the vaguely threatening migration emails and bit the bullet – had to reset at least three passwords/old emails to do it but man was it worth it. those bastards.
@jxnlco
@itsandrewgao
Plus (even light) strength training. Felt RSI coming on, realized I hadn’t been to the gym in a while, started back up and it was gone after two weeks. Mileage may vary, but cannot recommend this enough ^^
Brief history: back in the days of Codex beta I built a little bash wrapper to help me write shell commands, which turned into a
@deno_land
client/server to multiplex my key access, which turned into a proper open source Go project focused on being simple and pretty because ✨🤷🏽‍♂️
@TodePond
@simonw
@sawyerhood
@steveruizok
i’m really impressed by how well you got gpt-v to one-shot this… did you re-run a set of wireframes as you tweaked it? or was it vibes based off of one example at a time
(for context: i have tried setting X-Frame-Options to ALLOW-FROM [url] and Content-Security-Policy to frame-ancestors [urls], both in next.config.js and middleware.ts, in every possible permutation of these settings, and nothing I do changes the 401 X-Frame-Options: DENY response)
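For what it's worth, a minimal next.config.js sketch of the CSP route (the allowed origin is a placeholder). Modern browsers ignore X-Frame-Options ALLOW-FROM entirely, so frame-ancestors is the directive that matters, and if something else (middleware, a proxy, a hosting platform default) is still emitting X-Frame-Options: DENY, that needs to be found and removed:

```javascript
// Sketch: allow framing from one origin via CSP frame-ancestors.
// https://example.com is a placeholder for the embedding site.
module.exports = {
  async headers() {
    return [
      {
        source: "/:path*",
        headers: [
          {
            key: "Content-Security-Policy",
            value: "frame-ancestors 'self' https://example.com",
          },
        ],
      },
    ];
  },
};
```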