Mira Murati (@miramurati)
Followers: 384K · Following: 2K · Media: 6 · Statuses: 327
Now building @thinkymachines. Previously CTO @OpenAI
San Francisco, CA · Joined August 2010
Consider joining us: job-boards.greenhouse.io
we are opening a bunch of new roles at @thinkymachines this week. research roles are live, more to come 👇
83 replies · 94 reposts · 2K likes
Roadmap update: Tinker launched into private beta a month ago, and we've seen hundreds of builders and researchers train and experiment with models on our platform. This month we've added new models, expanded the cookbook, and improved overall capacity and performance.
We just added 4 new models to Tinker from the gpt-oss and DeepSeek-V3.1 families. Sign up for the waitlist: https://t.co/CAsOcUduwR
115 replies · 45 reposts · 515 likes
As part of our commitment to open and collaborative science, we’re expanding free access to Tinker for researchers and instructors with our grants program.
Today we’re announcing research and teaching grants for Tinker: credits for scholars and students to fine-tune and experiment with open-weight LLMs. Read more and apply at:
134 replies · 144 reposts · 2K likes
Combining the benefits of RL and SFT with on-policy distillation, a promising approach for training small models for domain performance and continual learning.
Our latest post explores on-policy distillation, a training approach that unites the error-correcting relevance of RL with the reward density of SFT. When using it to train models for math reasoning and as an internal chat assistant, we find that on-policy distillation can outperform other
102 replies · 226 reposts · 3K likes
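A toy sketch of the idea in the post above, with linear layers standing in for real student/teacher LMs (a simplification for intuition, not the post's exact recipe): supervise every position against the teacher's distribution via a reverse KL, on data that would, in a real LM, be sampled by the student itself.

```python
# Toy sketch of on-policy distillation: dense, per-token supervision from a
# teacher (reverse KL), evaluated on the student's own distribution. Linear
# layers stand in for real LMs; a simplification, not the post's recipe.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
vocab, hidden, seq_len = 32, 16, 8
student = torch.nn.Linear(hidden, vocab)   # stand-in student LM head
teacher = torch.nn.Linear(hidden, vocab)   # stand-in frozen teacher LM head
opt = torch.optim.Adam(student.parameters(), lr=1e-2)

for step in range(200):
    # In a real LM these hidden states would come from prefixes *sampled by
    # the student itself*; that sampling is what makes the data on-policy.
    h = torch.randn(seq_len, hidden)
    s_logp = F.log_softmax(student(h), dim=-1)
    with torch.no_grad():
        t_logp = F.log_softmax(teacher(h), dim=-1)
    # Reverse KL(student || teacher) at every position: a dense signal like
    # SFT, measured where the student actually puts probability, like RL.
    loss = (s_logp.exp() * (s_logp - t_logp)).sum(-1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```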
We're happy to support the Human Centered LLMs course, on topics close to our hearts. We'd like to support more classes with free credits for students to use on assignments and projects. If you're an instructor interested in using Tinker in your course, please reach out to
16 replies · 60 reposts · 641 likes
Tinker is cool. If you're a researcher/developer, tinker dramatically simplifies LLM post-training. You retain 90% of algorithmic creative control (usually related to data, loss function, the algorithm) while tinker handles the hard parts that you usually want to touch much less
Introducing Tinker: a flexible API for fine-tuning language models. Write training loops in Python on your laptop; we'll run them on distributed GPUs. Private beta starts today. We can't wait to see what researchers and developers build with cutting-edge open models!
110 replies · 651 reposts · 6K likes
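For flavor, a hedged sketch of what such a loop can look like. The client and primitive names (ServiceClient, create_lora_training_client, forward_backward, optim_step) follow my reading of the launch materials; the model name, data iterator, and hyperparameters are placeholder assumptions, not the documented API.

```python
# Hedged sketch of a Tinker-style training loop: the loop is ordinary local
# Python, while each call executes on managed distributed GPUs. Items marked
# as assumptions below are placeholders, not verified API.
import tinker
from tinker import types

service = tinker.ServiceClient()
client = service.create_lora_training_client(
    base_model="meta-llama/Llama-3.2-1B",  # assumed model identifier
)

for batch in my_batches:  # hypothetical iterator of tokenized training examples
    fb = client.forward_backward(batch, loss_fn="cross_entropy")
    op = client.optim_step(types.AdamParams(learning_rate=1e-4))
    fb.result(), op.result()  # futures resolve once the distributed step finishes
```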
Very excited to see the Tinker release! @pcmoritz and I had a chance to experiment with the API. It does a nice job of providing flexibility while abstracting away GPU handling. Here's a simple example showing how to generate synthetic data and fine tune a text to SQL model.
anyscale.com
Introducing Tinker: a flexible API for fine-tuning language models. Write training loops in Python on your laptop; we'll run them on distributed GPUs. Private beta starts today. We can't wait to see what researchers and developers build with cutting-edge open models!
9 replies · 37 reposts · 262 likes
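The linked example isn't reproduced here, but the first half of such a workflow (synthesizing question/SQL pairs to fine-tune on) has a simple shape. A sketch with an invented schema and templates, purely for illustration:

```python
# Toy synthetic-data generator for text-to-SQL fine-tuning. The schema,
# templates, and JSONL format are illustrative assumptions, not the linked example.
import json
import random

columns = ["name", "city", "signup_date"]
templates = [
    ("How many users are in {city}?",
     "SELECT COUNT(*) FROM users WHERE city = '{city}'"),
    ("List the {col} of every user in {city}.",
     "SELECT {col} FROM users WHERE city = '{city}'"),
]

random.seed(0)
with open("text2sql.jsonl", "w") as f:
    for _ in range(100):
        question, sql = random.choice(templates)
        vals = {"city": random.choice(["Paris", "Osaka", "Austin"]),
                "col": random.choice(columns)}
        f.write(json.dumps({
            "prompt": question.format(**vals),   # natural-language question
            "completion": sql.format(**vals),    # target SQL
        }) + "\n")
```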
Today we launched Tinker. Tinker brings frontier tools to researchers, offering clean abstractions for writing experiments and training pipelines while handling distributed training complexity. It enables novel research, custom models, and solid baselines. Excited to see what
Introducing Tinker: a flexible API for fine-tuning language models. Write training loops in Python on your laptop; we'll run them on distributed GPUs. Private beta starts today. We can't wait to see what researchers and developers build with cutting-edge open models!
190 replies · 522 reposts · 5K likes
Today on Connectionism: establishing the conditions under which LoRA matches full fine-tuning performance, with new experimental results and a grounding in information theory
LoRA makes fine-tuning more accessible, but it's unclear how it compares to full fine-tuning. We find that the performance often matches closely, more often than you might expect. In our latest Connectionism post, we share our experimental results and recommendations for LoRA.
57 replies · 145 reposts · 2K likes
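For reference on what's being compared: LoRA freezes the base weight W and trains a low-rank update, so the effective weight is W + (alpha/r) * B A. A generic sketch (standard LoRA, not the post's experimental setup):

```python
# Minimal LoRA linear layer: the base weight stays frozen; only the rank-r
# factors A and B train. Standard LoRA, not the post's exact configuration.
import torch

class LoRALinear(torch.nn.Module):
    def __init__(self, base: torch.nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                  # full weights stay frozen
        self.A = torch.nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r                       # B = 0, so the update starts at zero

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(torch.nn.Linear(64, 64))
y = layer(torch.randn(2, 64))                        # gradients flow only to A and B
```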
Sharing our second Connectionism research post on Modular Manifolds, a mathematical approach to refining training at each layer of the neural network
Efficient training of neural networks is difficult. Our second Connectionism post introduces Modular Manifolds, a theoretical step toward more stable and performant training by co-designing neural net optimizers with manifold constraints on weight matrices.
87 replies · 262 reposts · 3K likes
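The post itself is theoretical, but the core move (keeping weight matrices on a constraint manifold and retracting after each update) can be shown with a deliberately crude toy: project every matrix back onto a fixed-norm sphere after each step. This is a simplification for intuition, not the Modular Manifolds method.

```python
# Crude illustration of manifold-constrained training: after every optimizer
# step, retract each weight matrix back onto a sphere of fixed Frobenius norm.
# A toy for intuition only, not the Modular Manifolds algorithm.
import torch

net = torch.nn.Sequential(torch.nn.Linear(32, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.1)
radius = {id(p): p.norm().item() for p in net.parameters() if p.dim() == 2}

for _ in range(100):
    x = torch.randn(16, 32)
    loss = net(x).pow(2).mean()          # dummy objective
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        for p in net.parameters():
            if p.dim() == 2:             # retract matrices onto their original sphere
                p.mul_(radius[id(p)] / p.norm())
```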
At Thinking Machines, our work includes collaborating with the broader research community. Today we are excited to share that we are building a vLLM team at @thinkymachines to advance open-source vLLM and serve frontier models. If you are interested, please DM me or @barret_zoph!
40 replies · 81 reposts · 1K likes
A big part of our mission at Thinking Machines is to improve people’s scientific understanding of AI and work with the broader research community. Introducing Connectionism today to share some of our scientific insights.
Today Thinking Machines Lab is launching our research blog, Connectionism. Our first blog post is “Defeating Nondeterminism in LLM Inference” We believe that science is better when shared. Connectionism will cover topics as varied as our research is: from kernel numerics to
182 replies · 411 reposts · 5K likes
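The first post's central observation is easy to demo: floating-point addition is not associative, so kernels that change their reduction order (with batch size, for example) can return bit-different results for the same numbers. A minimal, self-contained sketch:

```python
# Two reduction orders over the same float32 values usually disagree in the
# last bits; the post traces this non-associativity (and kernels' lack of
# batch invariance) through LLM inference.
import torch

torch.manual_seed(0)
x = torch.randn(1_000_000, dtype=torch.float32)
full = x.sum()                         # one reduction order
split = x[::2].sum() + x[1::2].sum()   # a different reduction order
print(full.item(), split.item(), full.item() == split.item())  # typically unequal
```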
Thinking Machines Lab exists to empower humanity through advancing collaborative general intelligence. We're building multimodal AI that works with how you naturally interact with the world - through conversation, through sight, through the messy way we collaborate. We're
648 replies · 695 reposts · 8K likes
If you’d like to be part of a team making huge ambitious bets on multimodality among other things & work with Rowan, we’re hiring!
life update: I've joined @thinkymachines lab! We're building the future of human-AI interaction through open science, research+product co-iteration, and with multimodal at the core. If you're interested in joining our fantastic team - reach out! DMs open 😀
161 replies · 136 reposts · 3K likes
Follow us @thinkymachines for more updates over the coming weeks
Today, we are excited to announce Thinking Machines Lab ( https://t.co/gD5QlPMfWw), an artificial intelligence research and product company. We are scientists, engineers, and builders behind some of the most widely used AI products and libraries, including ChatGPT,
125 replies · 112 reposts · 2K likes
If you’re interested in joining our team, consider applying here
paperform.co
62 replies · 76 reposts · 944 likes
I started Thinking Machines Lab alongside a remarkable team of scientists, engineers, and builders. We're building three things:
- Helping people adapt AI systems to work for their specific needs
- Developing strong foundations to build more capable AI systems
- Fostering a
thinkingmachines.ai
Connectionism: Research Blog by Thinking Machines Lab
686 replies · 906 reposts · 9K likes
All Plus and Team users in ChatGPT
Advanced Voice is rolling out to all Plus and Team users in the ChatGPT app over the course of the week. While you’ve been patiently waiting, we’ve added Custom Instructions, Memory, five new voices, and improved accents. It can also say “Sorry I’m late” in over 50 languages.
207 replies · 219 reposts · 4K likes
The Safety and Security Committee, established to review critical safety and security issues, has made recommendations across five key areas, which we are adopting.
openai.com
An update on our safety & security practices
47 replies · 78 reposts · 633 likes