LM Studio
@lmstudio
40K Followers · 7K Following · 162 Media · 2K Statuses
Run local LLMs on your computer 👾 We are hiring https://t.co/8R53ZQYcrd
localhost:1234
Joined May 2023
Qwen3-VL models are now live in LM Studio! 🎉🚀 A powerful collection of vision-language models. Happy Halloween! 🎃👻
Update your llama.cpp engine first! Run in the terminal: $ lms runtime update llama.cpp https://t.co/tmZogkI49x
lmstudio.ai
Qwen's latest vision-language model. Includes comprehensive upgrades to visual perception, spatial reasoning, and image understanding.
We worked with OpenAI to get this model to you at launch! Use it with LM Studio's chat UI, SDK, or Responses API compatibility mode. Learn more about this release: https://t.co/bPIi7RCWKm
Now in research preview: gpt-oss-safeguard Two open-weight reasoning models built for safety classification. https://t.co/4rZLGhBO1w
gpt-oss-safeguard is the first gpt-oss fine-tune from @OpenAI. These models take a developer-provided policy at inference time, reason about new input, and then generate a response. Check the example in the LM Studio model page. https://t.co/Kq3ds9tK46
lmstudio.ai
gpt-oss-safeguard-20b and gpt-oss-safeguard-120b are open safety models from OpenAI, building on gpt-oss. Trained to help classify text content based on customizable policies.
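The policy-at-inference-time flow described above can be sketched against LM Studio's local OpenAI-compatible server. A minimal sketch in Python, assuming the default port 1234, that the developer policy is passed as the system message (as in the model-page example), and a hypothetical model identifier — check `lms ls` for the real one:

```python
import json
import urllib.request

# Hypothetical moderation policy supplied at inference time.
POLICY = """Classify the user message as ALLOWED or VIOLATION.
A message is a VIOLATION if it asks for instructions to harm others."""

def build_request(user_text: str, model: str = "openai/gpt-oss-safeguard-20b") -> dict:
    """Build an OpenAI-compatible chat payload for LM Studio's local server.
    The model id is an assumption; substitute whatever your catalog lists."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": POLICY},  # policy goes in at inference time
            {"role": "user", "content": user_text},
        ],
        "temperature": 0,
    }

payload = build_request("How do I reset my router password?")

# Send to the local server if one is running (LM Studio defaults to port 1234).
try:
    req = urllib.request.Request(
        "http://localhost:1234/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
except OSError:
    print("LM Studio server not reachable (is it running on port 1234?)")
```

The model then reasons about the input under that policy and returns its classification in the reply.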
Very excited to announce that today we have a set of “LM Studio” tiles available on our website! LM Studio is the perfect pair for the Framework Desktop by @FrameworkPuter. Out of the box you can be running gpt-oss-120b. @lmstudio
🎊Today is the day. NVIDIA DGX Sparks are now shipping from us and from our partners. ➡️ https://t.co/Nzmxz1lSHQ We enjoyed delivering some of our first desktop AI supercomputers to researchers, developers, universities, and creators to launch the next chapter in AI.
Run Qwen3-VL on Mac with LM Studio + MLX.
The next generation of Qwen-VL models is here! > Qwen3-VL 4B (dense, ~3GB) > Qwen3-VL 8B (dense, ~6GB) > Qwen3-VL 30B (MoE, ~18GB) These models come with comprehensive upgrades to visual perception, spatial reasoning, and image understanding. Supported with 🍎MLX on Mac.
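Talking to one of these vision models locally can be sketched as an OpenAI-style multimodal chat payload POSTed to LM Studio's server. A minimal sketch, assuming the default localhost:1234 server and a hypothetical model identifier; the inline PNG bytes are a placeholder:

```python
import base64
import json

def build_vision_request(image_bytes: bytes, question: str,
                         model: str = "qwen/qwen3-vl-4b") -> dict:
    """OpenAI-style chat payload with an inline base64 image.
    The model id is an assumption; check your local model catalog."""
    data_url = "data:image/png;base64," + base64.b64encode(image_bytes).decode()
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
    }

# Placeholder bytes; in practice read a real image file.
payload = build_vision_request(b"\x89PNG...", "What is in this image?")
print(json.dumps(payload)[:80])
```

POST the payload to `http://localhost:1234/v1/chat/completions` with `Content-Type: application/json` and the model's answer comes back in the usual `choices` array.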
Learn more about Qwen3-VL 4B and 8B:
Introducing the compact, dense versions of Qwen3-VL — now available in 4B and 8B pairs, each with both Instruct and Thinking variants. ✅ Lower VRAM usage ✅ Full Qwen3-VL capabilities retained ✅ Strong performance across the board Despite their size, they outperform models
Read our blog post about how to set up the Spark as a private LLM server on your network: https://t.co/WStXp1ovJs
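Once the Spark serves LM Studio on your network, any OpenAI-compatible client on the LAN can talk to it. A minimal sketch, assuming the server listens on the default port 1234 and a hypothetical hostname `spark.local` — substitute the machine's real IP or mDNS name:

```python
import json
import urllib.request

# Assumption: the Spark is running LM Studio's local server and is reachable
# at this hostname on your LAN.
SPARK_HOST = "spark.local"
BASE_URL = f"http://{SPARK_HOST}:1234/v1"

def list_models(base_url: str = BASE_URL) -> list[str]:
    """Return model ids from the OpenAI-compatible /v1/models endpoint."""
    with urllib.request.urlopen(f"{base_url}/models", timeout=5) as resp:
        return [m["id"] for m in json.loads(resp.read())["data"]]

try:
    print(list_models())
except OSError:
    print("server not reachable from this machine")
```

Point any OpenAI SDK at `BASE_URL` as its base URL and it will use the Spark instead of a cloud endpoint.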
Some data to help decide on the right precision for Qwen3 4B (Instruct 2507). I ran the full MMLU Pro eval, plus some efficiency benchmarks, with the model at every precision from 4-bit to bf16. TLDR: 6-bit is a very decent option, at < 1% gap in quality to the full-precision (bf16) model.
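The memory side of that trade-off is easy to estimate, since weight footprint scales linearly with bits per weight. A back-of-envelope sketch (it ignores KV cache, activations, and quantization metadata, so real files run somewhat larger):

```python
PARAMS = 4e9  # Qwen3 4B, approximate parameter count

def weight_gib(params: float, bits: int) -> float:
    """Approximate weight footprint in GiB at a given per-weight precision."""
    return params * bits / 8 / 2**30

for bits in (4, 6, 8, 16):
    # prints roughly 1.9 / 2.8 / 3.7 / 7.5 GiB for 4/6/8/16-bit
    print(f"{bits:>2}-bit: ~{weight_gib(PARAMS, bits):.1f} GiB")
```

So 6-bit holds quality within ~1% of bf16 at well under half the weight memory.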
Building local AI apps has never been easier thanks to @lmstudio ⚡️ Now, cloud-based workflows using @OpenAI's Responses API can easily run on NVIDIA RTX AI PCs. Details 👇
lmstudio.ai
OpenAI-compatible `/v1/responses` endpoint (stateful chats, remote MCP, custom tools)
LM Studio 0.3.30 is out now with bug fixes. 🛠️ Fixed tool-calling format issue with Qwen3 🌋 Fixed iGPU not utilized in llama.cpp Vulkan 🧑‍💻 'developer' role now supported in /v1/responses
Introducing OpenAI Responses API compatibility! /v1/responses on localhost. Supports stateful responses, custom tool use, and setting the reasoning level for local LLMs. 👇🧵