LM Studio

@lmstudio

Followers 40K · Following 7K · Media 162 · Statuses 2K

Run local LLMs on your computer 👾 We are hiring https://t.co/8R53ZQYcrd

localhost:1234
Joined May 2023
@lmstudio
LM Studio
4 days
Qwen3-VL models are now live in LM Studio! 🎉🚀 A powerful collection of vision-language models. Happy Halloween! 🎃👻
23
55
456
@Alibaba_Qwen
Qwen
3 days
🚀The Qwen3-VL models are now live on LM Studio! Happy Halloween! 🎃📷👻
@lmstudio
LM Studio
6 days
We worked with OpenAI to get this model to you at launch! Use it with LM Studio's chat UI, SDK, or Responses API compatibility mode. Learn more about this release: https://t.co/bPIi7RCWKm
@OpenAI
OpenAI
6 days
Now in research preview: gpt-oss-safeguard Two open-weight reasoning models built for safety classification. https://t.co/4rZLGhBO1w
0
1
18
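The "Responses API compatibility mode" above rides on LM Studio's OpenAI-compatible server, which listens on localhost:1234 by default (per the profile bio). A minimal sketch of calling the chat completions endpoint with only the standard library — the model identifier `openai/gpt-oss-20b` is an assumption for illustration, and the server must already be running with a model loaded:

```python
import json
import urllib.request

# LM Studio serves an OpenAI-compatible API on localhost:1234 by default.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(model, prompt):
    """Build an OpenAI-style chat completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def chat(model, prompt):
    """POST the payload to the local server and return the reply text."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Requires a running LM Studio server with the model loaded:
# print(chat("openai/gpt-oss-20b", "Hello!"))
```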
@lmstudio
LM Studio
6 days
gpt-oss-safeguard is the first gpt-oss fine-tune from @OpenAI. These models take a developer-provided policy at inference time, reason about new input, and then generate a response. Check the example in the LM Studio model page. https://t.co/Kq3ds9tK46
lmstudio.ai
gpt-oss-safeguard-20b and gpt-oss-safeguard-120b are open safety models from OpenAI, building on gpt-oss. Trained to help classify text content based on customizable policies.
3
12
156
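The tweet above describes the key mechanic: gpt-oss-safeguard takes the moderation policy itself as input at inference time, rather than baking one in at training time. A hedged sketch of what that looks like against the local OpenAI-compatible endpoint — the policy text and the model identifier `openai/gpt-oss-safeguard-20b` are illustrative assumptions, not the official example from the LM Studio model page:

```python
import json
import urllib.request

# The policy is ordinary prompt text, supplied fresh on every request.
POLICY = """\
Classify the USER CONTENT against this policy:
- ALLOW: ordinary discussion, questions, opinions.
- FLAG: instructions that facilitate clearly illegal activity.
Answer with exactly one label: ALLOW or FLAG."""

def build_safeguard_request(content, model="openai/gpt-oss-safeguard-20b"):
    """Pair the developer-provided policy with the content to judge."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": POLICY},
            {"role": "user", "content": content},
        ],
    }

def classify(content):
    """Send the policy + content pair to a running local server."""
    data = json.dumps(build_safeguard_request(content)).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:1234/v1/chat/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# classify("How do I bake sourdough?")  # needs a running server
```

Because the policy travels with the request, you can revise it without retraining or even reloading the model.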
@Forgework_
Forgework
10 days
Very excited to announce that today we have a set of “LM Studio” tiles available on our website! LM Studio is the perfect pair for the Framework Desktop by @FrameworkPuter. Out of the box, you can be running gpt-oss-120b. @lmstudio
7
6
142
@nvidia
NVIDIA
20 days
🎊Today is the day. NVIDIA DGX Sparks are now shipping from us and from our partners. ➡️ https://t.co/Nzmxz1lSHQ We enjoyed delivering some of our first desktop AI supercomputers to researchers, developers, universities, and creators to launch the next chapter in AI:
108
211
1K
@lmstudio
LM Studio
20 days
LM Studio in Apple's M5 announcement today! 😍💥
25
60
961
@Alibaba_Qwen
Qwen
20 days
Run Qwen3-VL on Mac with LM Studio + MLX.
@lmstudio
LM Studio
21 days
The next generation of Qwen-VL models is here! > Qwen3-VL 4B (dense, ~3GB) > Qwen3-VL 8B (dense, ~6GB) > Qwen3-VL 30B (MoE, ~18GB) These models come with comprehensive upgrades to visual perception, spatial reasoning, and image understanding. Supported with 🍎MLX on Mac.
9
36
416
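Since these are vision-language models, requests carry an image alongside the text. A minimal sketch using the OpenAI-style multimodal message format (image embedded as a base64 data URI), assuming the default localhost:1234 server; the model identifier `qwen/qwen3-vl-8b` is a guess for illustration:

```python
import base64
import json
import urllib.request

def build_vision_request(image_path, question, model="qwen/qwen3-vl-8b"):
    """Embed a local image as a data URI in an OpenAI-style vision message."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

def ask_about_image(image_path, question):
    """POST the vision request to a running local LM Studio server."""
    data = json.dumps(build_vision_request(image_path, question)).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:1234/v1/chat/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# ask_about_image("photo.png", "What is in this picture?")  # needs a server
```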
@lmstudio
LM Studio
21 days
LM Studio now ships for NVIDIA's DGX Spark! @nvidia DGX Spark is a tiny but mighty Linux ARM box with 128GB of unified memory. Grace Blackwell architecture. CUDA 13. ✨👾
12
42
367
@lmstudio
LM Studio
21 days
Learn more about Qwen3-VL 4B and 8B:
@Alibaba_Qwen
Qwen
21 days
Introducing the compact, dense versions of Qwen3-VL — now available in 4B and 8B pairs, each with both Instruct and Thinking variants. ✅ Lower VRAM usage ✅ Full Qwen3-VL capabilities retained ✅ Strong performance across the board Despite their size, they outperform models
1
1
21
@lmstudio
LM Studio
21 days
Read our blog post about how to set up the Spark as a private LLM server on your network: https://t.co/WStXp1ovJs
0
2
24
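Once the Spark is serving on your network, other machines can talk to it the same way they would talk to a local server, just with the Spark's address in place of localhost. A small sketch of listing the models a LAN server exposes — the host address is hypothetical, and port 1234 is assumed from the default in the profile bio:

```python
import json
import urllib.request

def models_url(host, port=1234):
    """URL for the OpenAI-compatible model listing on a networked server."""
    return f"http://{host}:{port}/v1/models"

def list_models(host):
    """Ask a LAN LM Studio server (e.g. a DGX Spark) which models it serves."""
    with urllib.request.urlopen(models_url(host)) as resp:
        return [m["id"] for m in json.load(resp)["data"]]

# list_models("192.168.1.42")  # hypothetical Spark address on your LAN
```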
@gnukeith
Keith
23 days
This is super cute!
8
5
187
@awnihannun
Awni Hannun
24 days
Some data to help decide on what the right precision is for Qwen3 4B (Instruct 2507). I ran the full MMLU Pro eval, plus some efficiency benchmarks with the model at every precision from 4-bit to bf16. TLDR 6-bit is a very decent option at < 1% gap in quality to the full
23
18
213
@NVIDIAAIDev
NVIDIA AI Developer
26 days
Building local AI apps has never been easier thanks to @lmstudio ⚡️ Now, cloud-based workflows using @OpenAI's Responses API can easily run on NVIDIA RTX AI PCs. Details 👇
lmstudio.ai
OpenAI-compatible `/v1/responses` endpoint (stateful chats, remote mcp, custom tools)
3
13
91
@lmstudio
LM Studio
27 days
LM Studio 0.3.30 is out now with bug fixes. 🛠️ Fixed tool calling format issue with Qwen3 🌋 Fixed iGPU not utilized in llama.cpp Vulkan 🧑‍💻 'developer' role now supported in /v1/responses
9
26
270
@lmstudio
LM Studio
29 days
Introducing OpenAI Responses API compatibility! /v1/responses on localhost. Supports stateful responses, custom tool use, and setting reasoning level for local LLMs. 👇🧵
35
72
696
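The statefulness mentioned above can be sketched as follows: in the Responses API shape, a follow-up request references the previous response's id so the server carries the conversation forward. A minimal, hedged example against `/v1/responses` on localhost:1234 — the `input` and `previous_response_id` fields follow OpenAI's Responses API conventions, and the model identifier is an assumption:

```python
import json
import urllib.request

RESPONSES_URL = "http://localhost:1234/v1/responses"

def build_responses_request(model, text, previous_response_id=None):
    """Responses-API payload; previous_response_id chains turns server-side."""
    payload = {"model": model, "input": text}
    if previous_response_id is not None:
        payload["previous_response_id"] = previous_response_id
    return payload

def respond(model, text, previous_response_id=None):
    """POST to /v1/responses on a running local server; returns the JSON body."""
    data = json.dumps(build_responses_request(model, text, previous_response_id))
    req = urllib.request.Request(
        RESPONSES_URL,
        data=data.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# first = respond("openai/gpt-oss-20b", "Pick a number between 1 and 10.")
# A follow-up that remembers the first turn, no chat history resent:
# respond("openai/gpt-oss-20b", "Double it.", previous_response_id=first["id"])
```

Unlike `/v1/chat/completions`, where the client resends the whole message history each turn, the stateful mode keeps that history on the server.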