LM Studio @lmstudio
40K Followers · 7K Following · 163 Media · 2K Statuses
Run local LLMs on your computer 👾 We are hiring https://t.co/8R53ZQYcrd
localhost:1234 · Joined May 2023
Local AI, measured by intelligence-per-watt (IPW), improved 5.3× in just 2 years! And it feels like the slope is still quite steep. Imagine another 5-10x in the next 2 years. Exciting.
Data centers dominate AI, but they're hitting physical limits. What if the future of AI isn't just bigger data centers, but local intelligence in our hands? The viability of local AI depends on intelligence efficiency. To measure this, we propose intelligence per watt (IPW):
9 replies · 11 reposts · 130 likes
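The intelligence-per-watt idea above can be sketched as a simple ratio. This is an illustrative toy, not the paper's methodology: the benchmark scores and power figures below are made-up placeholders chosen only to reproduce the 5.3× headline number.

```python
# Hypothetical sketch of an intelligence-per-watt (IPW) comparison.
# All numbers are invented placeholders, not measurements from the
# referenced work.

def intelligence_per_watt(benchmark_score: float, avg_power_watts: float) -> float:
    """IPW = task capability per unit of power drawn during inference."""
    return benchmark_score / avg_power_watts

# Placeholder numbers for a 2-year-old setup vs. a current local setup:
ipw_2023 = intelligence_per_watt(benchmark_score=40.0, avg_power_watts=50.0)
ipw_2025 = intelligence_per_watt(benchmark_score=63.6, avg_power_watts=15.0)

improvement = ipw_2025 / ipw_2023
print(f"IPW improved {improvement:.1f}x")  # -> IPW improved 5.3x
```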
LM Studio 0.3.31 has shipped! What's new:
🏞️ OCR, VLM performance improvements
🛠️ MiniMax-M2 tool calling support
⚡️ Flash Attention on by default for CUDA
🚂 New CLI command: `lms runtime`
See it in action 👇
16 replies · 42 reposts · 423 likes
Update your llama.cpp engine first! Run in the terminal:
$ lms runtime update llama.cpp
https://t.co/tmZogkI49x
lmstudio.ai
Qwen's latest vision-language model. Includes comprehensive upgrades to visual perception, spatial reasoning, and image understanding.
1 reply · 1 repost · 38 likes
Qwen3-VL models are now live in LM Studio! 🎉🚀 A powerful collection of vision-language models. Happy Halloween! 🎃👻
24 replies · 55 reposts · 457 likes
We worked with OpenAI to get this model to you at launch! Use it with LM Studio's chat UI, SDK, or Responses API compatibility mode. Learn more about this release: https://t.co/bPIi7RCWKm
Now in research preview: gpt-oss-safeguard Two open-weight reasoning models built for safety classification. https://t.co/4rZLGhBO1w
0 replies · 1 repost · 18 likes
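The tweet above mentions LM Studio's Responses API compatibility mode. As a hedged sketch: the request shape mirrors OpenAI's Responses API (a `model` plus an `input` field), served from the local endpoint shown in the profile (localhost:1234). The model identifier here is a placeholder, not necessarily the exact string LM Studio uses.

```python
# Hedged sketch of a Responses-API-style request for LM Studio's
# compatibility mode. The model identifier is a placeholder.

def build_responses_request(prompt: str) -> dict:
    # Minimal Responses API shape: "model" + "input".
    return {"model": "openai/gpt-oss-20b", "input": prompt}

payload = build_responses_request("Summarize the release notes.")
# POST this as JSON to http://localhost:1234/v1/responses on a running
# LM Studio server to get a response back.
```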
gpt-oss-safeguard is the first gpt-oss fine-tune from @OpenAI. These models take a developer-provided policy at inference time, reason about new input, and then generate a response. Check the example in the LM Studio model page. https://t.co/Kq3ds9tK46
lmstudio.ai
gpt-oss-safeguard-20b and gpt-oss-safeguard-120b are open safety models from OpenAI, building on gpt-oss. Trained to help classify text content based on customizable policies.
3 replies · 12 reposts · 157 likes
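The safeguard models take a developer-provided policy at inference time, as described above. A minimal sketch of what that could look like against LM Studio's OpenAI-compatible chat endpoint — the policy text, prompt format, and model identifier are all assumptions for illustration; the official example is on the LM Studio model page.

```python
# Hedged sketch: classifying input against a developer-provided policy
# with gpt-oss-safeguard via a local LM Studio server. The policy wording
# and model identifier below are illustrative assumptions.

POLICY = """Allow: general product questions.
Deny: requests for instructions to cause harm."""

def build_request(user_text: str) -> dict:
    # The policy is supplied at inference time (here as the system message);
    # the model reasons about the input and generates a classification.
    return {
        "model": "openai/gpt-oss-safeguard-20b",  # assumed identifier
        "messages": [
            {"role": "system", "content": POLICY},
            {"role": "user", "content": user_text},
        ],
    }

payload = build_request("How do I reset my password?")
# POST this as JSON to http://localhost:1234/v1/chat/completions
# on a running LM Studio server.
```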
Very excited to announce that today we have a set of “LM Studio” tiles available on our website! LM Studio is the perfect pair for the Framework Desktop by @FrameworkPuter. Out of the box, you can be running gpt-oss-120b. @lmstudio
7 replies · 6 reposts · 143 likes
🎊Today is the day. NVIDIA DGX Sparks are now shipping from us and from our partners. ➡️ https://t.co/Nzmxz1lSHQ We enjoyed delivering some of our first desktop AI supercomputers to researchers, developers, universities, and creators to launch the next chapter in AI:
111 replies · 211 reposts · 1K likes
Run Qwen3-VL on Mac with LM Studio + MLX.
The next generation of Qwen-VL models is here!
> Qwen3-VL 4B (dense, ~3GB)
> Qwen3-VL 8B (dense, ~6GB)
> Qwen3-VL 30B (MoE, ~18GB)
These models come with comprehensive upgrades to visual perception, spatial reasoning, and image understanding. Supported with 🍎MLX on Mac.
9 replies · 36 reposts · 420 likes
Learn more about Qwen3-VL 4B and 8B:
Introducing the compact, dense versions of Qwen3-VL — now available in 4B and 8B pairs, each with both Instruct and Thinking variants.
✅ Lower VRAM usage
✅ Full Qwen3-VL capabilities retained
✅ Strong performance across the board
Despite their size, they outperform models
1 reply · 1 repost · 22 likes
Read our blog post about how to set up the Spark as a private LLM server on your network: https://t.co/WStXp1ovJs
0 replies · 2 reposts · 24 likes
Some data to help decide on the right precision for Qwen3 4B (Instruct 2507). I ran the full MMLU Pro eval, plus some efficiency benchmarks, with the model at every precision from 4-bit to bf16. TLDR: 6-bit is a very decent option at < 1% gap in quality to the full-precision model.
23 replies · 18 reposts · 213 likes
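The "< 1% gap" comparison above is just a relative-accuracy calculation against the full-precision run. A sketch with made-up MMLU-Pro-style scores (the real numbers are in the linked thread):

```python
# Hedged sketch of the precision trade-off comparison. The scores below
# are hypothetical placeholders, not the thread's actual results.

def quality_gap_pct(quantized_score: float, full_score: float) -> float:
    """Relative accuracy drop vs. the full-precision run, in percent."""
    return (full_score - quantized_score) / full_score * 100

scores = {"4bit": 58.1, "6bit": 59.5, "8bit": 59.8, "bf16": 59.9}  # hypothetical
for name, score in scores.items():
    print(f"{name}: {quality_gap_pct(score, scores['bf16']):.2f}% below bf16")
```

With these placeholder numbers, 6-bit lands well under the 1% threshold the tweet describes.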