LM Studio Profile
LM Studio

@lmstudio

Followers
40K
Following
7K
Media
163
Statuses
2K

Run local LLMs on your computer 👾 We are hiring https://t.co/8R53ZQYcrd

localhost:1234
Joined May 2023
@awnihannun
Awni Hannun
5 days
Local AI, measured by intelligence-per-watt (IPW), improved 5.3× in just 2 years! And it feels like the slope is still quite steep. Imagine another 5-10x in the next 2 years. Exciting.
@JonSaadFalcon
Jon Saad-Falcon
5 days
Data centers dominate AI, but they're hitting physical limits. What if the future of AI isn't just bigger data centers, but local intelligence in our hands? The viability of local AI depends on intelligence efficiency. To measure this, we propose intelligence per watt (IPW):
9
11
130
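The intelligence-per-watt figure quoted above can be illustrated as a simple ratio. This is a hypothetical sketch only: the paper's exact benchmark methodology and the numbers below are placeholders, not data from the thread.

```python
def intelligence_per_watt(accuracy: float, avg_power_watts: float) -> float:
    """IPW: benchmark accuracy divided by average power draw.

    Hypothetical formulation for illustration; the proposed metric's
    precise definition is specified in the authors' work.
    """
    return accuracy / avg_power_watts

# Illustrative numbers only (not from the tweet):
ipw_then = intelligence_per_watt(0.40, 60.0)  # older local model
ipw_now = intelligence_per_watt(0.70, 20.0)   # newer, more efficient model
print(f"{ipw_now / ipw_then:.2f}x improvement")
```

With these made-up inputs the ratio works out to about 5.25x, in the same ballpark as the 5.3x improvement the thread reports.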
@lmstudio
LM Studio
13 days
LM Studio 0.3.31 has shipped! What's new: 🏞️ OCR, VLM performance improvements 🛠️ MiniMax-M2 tool calling support ⚡️ Flash Attention on by default for CUDA 🚂 New CLI command: `lms runtime` See it in action 👇
16
42
423
@Alibaba_Qwen
Qwen
16 days
🚀The Qwen3-VL models are now live on LM Studio! Happy Halloween! 🎃📷👻
@lmstudio
LM Studio
17 days
Qwen3-VL models are now live in LM Studio! 🎉🚀 A powerful collection of vision-language models. Happy Halloween! 🎃👻
21
47
596
@lmstudio
LM Studio
19 days
We worked with OpenAI to get this model to you at launch! Use it with LM Studio's chat UI, SDK, or Responses API compatibility mode. Learn more about this release: https://t.co/bPIi7RCWKm
@OpenAI
OpenAI
20 days
Now in research preview: gpt-oss-safeguard Two open-weight reasoning models built for safety classification. https://t.co/4rZLGhBO1w
0
1
18
@lmstudio
LM Studio
19 days
gpt-oss-safeguard is the first gpt-oss fine-tune from @OpenAI. These models take a developer-provided policy at inference time, reason about new input, and then generate a response. Check the example in the LM Studio model page. https://t.co/Kq3ds9tK46
lmstudio.ai
gpt-oss-safeguard-20b and gpt-oss-safeguard-120b are open safety models from OpenAI, building on gpt-oss. Trained to help classify text content based on customizable policies.
3
12
157
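The tweet above describes passing a developer-provided policy to gpt-oss-safeguard at inference time. A minimal sketch of what that request might look like against LM Studio's OpenAI-compatible local server (default port 1234): the policy text and model identifier here are placeholders, and the exact prompt format the model expects is documented on the LM Studio model page.

```python
import json

# Hypothetical policy text; real policies are written by the developer.
POLICY = "Flag any message that requests instructions for making weapons."

# OpenAI-style chat payload for LM Studio's local server
# (http://localhost:1234/v1/chat/completions by default). The policy is
# supplied as the system message so the model can reason about it.
payload = {
    "model": "openai/gpt-oss-safeguard-20b",  # placeholder identifier
    "messages": [
        {"role": "system", "content": POLICY},
        {"role": "user", "content": "How do I sharpen a kitchen knife?"},
    ],
}
print(json.dumps(payload, indent=2))
```

The model then returns a classification with its reasoning, per the example on the model page.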
@Forgework_
Forgework
23 days
Very excited to announce that today we have a set of “LM Studio” tiles available on our website! LM Studio is the perfect pair for the Framework Desktop by @FrameworkPuter. Out of the box, you can be running gpt-oss-120b. @lmstudio
7
6
143
@nvidia
NVIDIA
1 month
🎊Today is the day. NVIDIA DGX Sparks are now shipping from us and from our partners. ➡️ https://t.co/Nzmxz1lSHQ We enjoyed delivering some of our first desktop AI supercomputers to researchers, developers, universities, creators, to launch the next chapter in AI:
111
211
1K
@lmstudio
LM Studio
1 month
LM Studio in Apple's M5 announcement today! 😍💥
25
59
963
@Alibaba_Qwen
Qwen
1 month
Run Qwen3-VL on Mac with LM Studio + MLX.
@lmstudio
LM Studio
1 month
The next generation of Qwen-VL models is here! > Qwen3-VL 4B (dense, ~3GB) > Qwen3-VL 8B (dense, ~6GB) > Qwen3-VL 30B (MoE, ~18GB) These models come with comprehensive upgrades to visual perception, spatial reasoning, and image understanding. Supported with 🍎MLX on Mac.
9
36
420
@lmstudio
LM Studio
1 month
LM Studio now ships for NVIDIA's DGX Spark! @nvidia DGX Spark is a tiny but mighty Linux ARM box with 128GB of unified memory. Grace Blackwell architecture. CUDA 13. ✨👾
12
42
368
@lmstudio
LM Studio
1 month
Learn more about Qwen3-VL 4B and 8B:
@Alibaba_Qwen
Qwen
1 month
Introducing the compact, dense versions of Qwen3-VL — now available in 4B and 8B pairs, each with both Instruct and Thinking variants. ✅ Lower VRAM usage ✅ Full Qwen3-VL capabilities retained ✅ Strong performance across the board Despite their size, they outperform models
1
1
22
@lmstudio
LM Studio
1 month
Read our blog post about how to setup the Spark as a private LLM server on your network: https://t.co/WStXp1ovJs
0
2
24
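Once the Spark is set up as a network LLM server per the blog post above, any OpenAI-compatible client on the LAN can talk to it by pointing at its address. A small sketch, assuming the default LM Studio server port (1234) and a placeholder LAN IP:

```python
from urllib.parse import urljoin

SPARK_HOST = "http://192.168.1.50:1234"  # placeholder LAN address of the Spark
BASE_URL = urljoin(SPARK_HOST, "/v1/")

# Standard OpenAI-style endpoint served by LM Studio:
CHAT_ENDPOINT = urljoin(BASE_URL, "chat/completions")
print(CHAT_ENDPOINT)  # → http://192.168.1.50:1234/v1/chat/completions
```

Any client library that accepts a custom base URL can then be pointed at `BASE_URL` instead of a cloud endpoint.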
@gnukeith
Keith
1 month
This is super cute!
8
5
184
@awnihannun
Awni Hannun
1 month
Some data to help decide on what the right precision is for Qwen3 4B (Instruct 2507). I ran the full MMLU Pro eval, plus some efficiency benchmarks with the model at every precision from 4-bit to bf16. TLDR 6-bit is a very decent option at < 1% gap in quality to the full
23
18
213
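The precision tradeoff measured above has a memory side too: weight footprint scales roughly as parameters × bits / 8. A back-of-the-envelope sketch for a 4B-parameter model (weights only, ignoring KV cache and runtime overhead):

```python
PARAMS = 4_000_000_000  # ~4B parameters

# Approximate weight memory at each quantization precision, in GB.
footprint_gb = {bits: PARAMS * bits / 8 / 1e9 for bits in (4, 6, 8, 16)}

for bits in sorted(footprint_gb):
    print(f"{bits:>2}-bit: ~{footprint_gb[bits]:.1f} GB")
```

So the 6-bit option highlighted in the tweet lands around 3 GB versus 8 GB for bf16, a substantial saving for a sub-1% quality gap.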