Thinking Machines Profile
Thinking Machines
@thinkymachines

Followers: 102K · Following: 0 · Media: 7 · Statuses: 23

Thinking, beeping, and booping.

Joined February 2025
@thinkymachines
Thinking Machines
1 month
Introducing Tinker: a flexible API for fine-tuning language models. Write training loops in Python on your laptop; we'll run them on distributed GPUs. Private beta starts today. We can't wait to see what researchers and developers build with cutting-edge open models!
225 replies · 791 reposts · 6K likes
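As a rough sketch of the "training loops in Python on your laptop" shape this announcement describes: the client and method names below (ServiceClient, create_lora_training_client, forward_backward, optim_step, save_state) follow Tinker's public materials, but the exact signatures, the AdamParams fields, and the load_batches helper are illustrative assumptions, not the official API.

```python
# Sketch of a Tinker-style loop: the loop itself runs locally, the heavy
# lifting runs on Tinker's distributed GPUs. Method names follow the public
# materials; exact signatures and parameter fields here are assumptions.
import tinker
from tinker import types

service = tinker.ServiceClient()
client = service.create_lora_training_client(
    base_model="Qwen/Qwen3-32B",  # any supported open-weight model
)

for batch in load_batches("train.jsonl"):  # load_batches: your own data helper
    # Forward + backward on remote GPUs; gradients accumulate server-side.
    client.forward_backward(batch, loss_fn="cross_entropy")
    # One optimizer update using the accumulated gradients.
    client.optim_step(types.AdamParams(learning_rate=1e-4))

client.save_state(name="my-finetune")  # checkpoint the LoRA weights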
@thinkymachines
Thinking Machines
11 hours
Science is best shared! Tell us about what you’ve built or discovered with Tinker, so we can tell the world about it on our blog. More details at
[Link: thinkingmachines.ai — Announcing Tinker Community Projects]
16 replies · 24 reposts · 207 likes
@thinkymachines
Thinking Machines
9 days
In addition to expanding capacity, we are adding new models to our lineup and working on image support and production inference. We are excited to see what you build with Tinker!
3 replies · 0 reposts · 81 likes
@thinkymachines
Thinking Machines
9 days
Starting Monday, November 3rd, Tinker is switching to a pricing plan that reflects compute usage. This will ensure we have sufficient capacity to clear our waitlist by the end of the year, allowing anyone to sign up and start Tinkering. https://t.co/RGEEBj4VVo
6 replies · 9 reposts · 335 likes
@thinkymachines
Thinking Machines
9 days
Roadmap update: Tinker launched into private beta a month ago, and we've seen hundreds of builders and researchers train and experiment with models on our platform. This month we've added new models, expanded the cookbook, and improved overall capacity and performance.
@thinkymachines
Thinking Machines
11 days
We just added 4 new models to Tinker from the gpt-oss and DeepSeek-V3.1 families. Sign up for the waitlist: https://t.co/CAsOcUduwR
15 replies · 35 reposts · 454 likes
@thinkymachines
Thinking Machines
10 days
Today we’re announcing research and teaching grants for Tinker: credits for scholars and students to fine-tune and experiment with open-weight LLMs. Read more and apply at:
18 replies · 117 reposts · 979 likes
@thinkymachines
Thinking Machines
11 days
We just added 4 new models to Tinker from the gpt-oss and DeepSeek-V3.1 families. Sign up for the waitlist: https://t.co/CAsOcUduwR
20 replies · 37 reposts · 547 likes
@thinkymachines
Thinking Machines
12 days
Our latest post explores on-policy distillation, a training approach that unites the error-correcting relevance of RL with the reward density of SFT. When we use it to train models for math reasoning and as an internal chat assistant, we find that on-policy distillation can outperform other …
62 replies · 392 reposts · 3K likes
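A sketch of the idea (not the post's exact recipe): sample from the student so supervision lands on the states the student actually visits (the error-correcting relevance of RL), but score every sampled token against a teacher's distribution (the reward density of SFT). Hugging-Face-style causal-LM interfaces are assumed here.

```python
# Minimal on-policy distillation step, assuming two causal LMs (teacher,
# student) with HF-style interfaces. The student generates its own rollouts,
# then every sampled token is supervised with a per-token reverse KL
# against the teacher: dense reward, applied exactly where the student goes.
import torch
import torch.nn.functional as F

def on_policy_distill_step(student, teacher, prompts, optimizer, max_new=128):
    # 1) On-policy: the student generates its own continuations.
    with torch.no_grad():
        rollouts = student.generate(prompts, max_new_tokens=max_new)

    # 2) Score the same token sequence under both models.
    student_logits = student(rollouts).logits[:, :-1]
    with torch.no_grad():
        teacher_logits = teacher(rollouts).logits[:, :-1]

    # 3) Dense supervision: reverse KL(student || teacher) at every position.
    s_logp = F.log_softmax(student_logits, dim=-1)
    t_logp = F.log_softmax(teacher_logits, dim=-1)
    loss = (s_logp.exp() * (s_logp - t_logp)).sum(-1).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```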
@karpathy
Andrej Karpathy
1 month
Tinker is cool. If you're a researcher/developer, tinker dramatically simplifies LLM post-training. You retain 90% of algorithmic creative control (usually related to data, loss function, the algorithm) while tinker handles the hard parts that you usually want to touch much less …
@thinkymachines
Thinking Machines
1 month
Introducing Tinker: a flexible API for fine-tuning language models. Write training loops in Python on your laptop; we'll run them on distributed GPUs. Private beta starts today. We can't wait to see what researchers and developers build with cutting-edge open models!
112 replies · 656 reposts · 6K likes
@pcmoritz
Philipp Moritz
1 month
Very excited to see the Tinker release by @thinkymachines! @robertnishihara and I had a chance to experiment with the API, see https://t.co/tHnfufTTxt. It does a nice job of providing flexibility while abstracting away GPU handling. This will be 🔥 when combined with …
2 replies · 23 reposts · 154 likes
@robertnishihara
Robert Nishihara
1 month
Very excited to see the Tinker release! @pcmoritz and I had a chance to experiment with the API. It does a nice job of providing flexibility while abstracting away GPU handling. Here's a simple example showing how to generate synthetic data and fine-tune a text-to-SQL model.
[Link: anyscale.com — "Powered by Ray, Anyscale empowers AI builders to run and scale all ML and AI workloads on any cloud and on-prem."]
@thinkymachines
Thinking Machines
1 month
Introducing Tinker: a flexible API for fine-tuning language models. Write training loops in Python on your laptop; we'll run them on distributed GPUs. Private beta starts today. We can't wait to see what researchers and developers build with cutting-edge open models!
8 replies · 36 reposts · 259 likes
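A guess at what the "generate synthetic data and fine-tune a text-to-SQL model" flow above might look like in miniature. The `generate` callable is a hypothetical stand-in for whatever model produces the pairs; the executable-SQL filter is the step that makes synthetic data safe to fine-tune on.

```python
# Hypothetical synthetic-data step for a text-to-SQL fine-tune: prompt a
# strong model to invent (question, SQL) pairs for a schema, then keep only
# pairs whose SQL actually executes against that schema.
import json
import sqlite3

SCHEMA = "CREATE TABLE orders (id INT, customer TEXT, total REAL, day DATE);"

def make_examples(generate, n=200):  # `generate`: any LLM call you have
    examples = []
    for _ in range(n):
        raw = generate(
            f"Schema:\n{SCHEMA}\n"
            "Write one natural-language question about this table and the "
            'SQL that answers it, as JSON {"question": ..., "sql": ...}.'
        )
        try:
            pair = json.loads(raw)
            # Filter: discard anything that doesn't parse and run as SQL.
            sqlite3.connect(":memory:").executescript(SCHEMA + pair["sql"])
        except (json.JSONDecodeError, sqlite3.Error, KeyError, TypeError):
            continue
        examples.append({"prompt": pair["question"], "completion": pair["sql"]})
    return examples
```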
@s_ibarraran
Sebastian Ibarraran
1 month
While reinforcement learning has been demonstrated to improve LLM performance on mathematical reasoning tasks, there is currently far less evidence of performant scientific reasoning models. Using Tinker by @thinkymachines, we were able to rapidly train a variety of models on …
5 replies · 19 reposts · 192 likes
@chijinML
Chi Jin
1 month
🚀 With early access to Tinker, we matched full-parameter SFT performance as in Goedel-Prover V2 (32B) using LoRA, with both runs trained on the same 20% of the data. 📊 MiniF2F Pass@32 ≈ 81 (20%-data SFT). Next: full-scale training + RL. This is something that previously took a lot more effort.
[Link: thinkingmachines.ai — How LoRA matches full training performance more broadly than expected]
2 replies · 20 reposts · 186 likes
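For readers unfamiliar with the metric: Pass@32 here is presumably the standard pass@k estimate (Chen et al., 2021), computed from n sampled proof attempts of which c verify.

```python
# Unbiased pass@k estimator from Chen et al. (2021): draw n samples per
# problem, count c successes, compute 1 - C(n-c, k) / C(n, k), and average
# over problems.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate from n samples with c successes."""
    if n - c < k:  # too few failures to fill k draws: pass@k = 1
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 40 proof attempts with 5 verified: pass_at_k(40, 5, 32) ≈ 0.9999,
# since almost every 32-subset of the 40 samples contains a success.
```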
@__Charlie_G
Charlie George
1 month
1/ How do you verify complex AI outputs at scale without expert-labelled data? Working with @thinkymachines' new RL API Tinker, I've been expanding on some previous work I shared around using unstructured internet data to train models to grade IMO / USAMO solutions.
2 replies · 8 reposts · 70 likes
@ejcgan
Eric Gan
1 month
I've been using Tinker at Redwood Research to RL-train long-context models like Qwen3-32B on difficult AI control tasks, specifically teaching models to write unsuspicious backdoors in code, similar to the AI control paper. Early stages, but seeing some interesting backdoors 👀
2 replies · 3 reposts · 44 likes
@tyler_griggs_
Tyler Griggs
1 month
I had the chance to try @thinkymachines' Tinker API for the past couple weeks. Some early impressions: very hackable, and it lifts a lot of the LLM training burden; a great fit for researchers who want to focus on algorithms + data, not infra. My research is in RL, and many RL fine-tuning …
@thinkymachines
Thinking Machines
1 month
Introducing Tinker: a flexible API for fine-tuning language models. Write training loops in Python on your laptop; we'll run them on distributed GPUs. Private beta starts today. We can't wait to see what researchers and developers build with cutting-edge open models!
10 replies · 27 reposts · 496 likes
@thinkymachines
Thinking Machines
1 month
Tinker advances our mission of enabling more people to do research on cutting-edge models and customize them to their needs. https://t.co/raQkVTiUNB
4 replies · 20 reposts · 355 likes
@thinkymachines
Thinking Machines
1 month
LoRA makes fine-tuning more accessible, but it's unclear how it compares to full fine-tuning. We find that LoRA's performance often matches it closely, more often than you might expect. In our latest Connectionism post, we share our experimental results and recommendations for LoRA.
80 replies · 562 reposts · 3K likes
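As background on what is being compared: LoRA (Hu et al., 2021) freezes the pretrained weight and trains only a low-rank update, so a d_out × d_in matrix needs just r·(d_in + d_out) trainable parameters instead of d_in·d_out. A minimal self-contained version:

```python
# Minimal LoRA layer: the frozen base weight W plus a learned low-rank
# update (alpha/r) * B @ A. B starts at zero, so training begins exactly
# at the pretrained model.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 16, alpha: int = 32):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weight
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # Base output plus the low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```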
@thinkymachines
Thinking Machines
1 month
Efficient training of neural networks is difficult. Our second Connectionism post introduces Modular Manifolds, a theoretical step toward more stable and performant training by co-designing neural net optimizers with manifold constraints on weight matrices.
118 replies · 461 reposts · 3K likes
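A toy illustration of the pattern the post builds on, using the simplest possible constraint (weight-matrix rows kept at unit L2 norm): project the gradient onto the manifold's tangent space, step, then retract back onto the manifold. The post's actual construction, co-designed with the optimizer, is more general than this sketch.

```python
# Toy manifold-constrained update: tangent-space step + retraction, for the
# manifold of matrices with unit-norm rows (rows assumed unit-norm on entry,
# which the retraction maintains).
import torch

def constrained_step(W: torch.Tensor, grad: torch.Tensor, lr: float = 1e-2):
    with torch.no_grad():
        # 1) Remove the gradient component that points off the manifold:
        #    for unit rows, the radial component (g·w) w of each row.
        radial = (grad * W).sum(dim=1, keepdim=True) * W
        W -= lr * (grad - radial)  # step in the tangent space
        # 2) Retraction: renormalize each row back to unit norm.
        W /= W.norm(dim=1, keepdim=True)
    return W
```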
@thinkymachines
Thinking Machines
2 months
Today Thinking Machines Lab is launching our research blog, Connectionism. Our first blog post is “Defeating Nondeterminism in LLM Inference.” We believe that science is better when shared. Connectionism will cover topics as varied as our research is: from kernel numerics to …
237 replies · 1K reposts · 8K likes
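A few-line illustration of the root cause that post examines: floating-point addition is not associative, so the same numbers reduced in a different order (as happens when GPU kernels change with batch size) can produce different bits, and a greedy decoder can then diverge.

```python
# The same million float32 values, summed in two different orders,
# typically do not agree bit-for-bit.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000).astype(np.float32)

s_forward = np.sum(x)                                   # one reduction order
s_chunked = sum(np.sum(c) for c in np.split(x, 1000))   # another order

print(s_forward, s_chunked, s_forward == s_chunked)     # typically unequal
```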