matt
@mattjcly
Followers 148 · Following 154 · Media 4 · Statuses 44

Systems/ML Software Engineer @ LM Studio

Joined February 2024
matt @mattjcly · 2 days
RT @lmstudio: LM Studio is now free for use at work. Starting today, it is no longer necessary to get a separate license to use LM Studio for…

matt @mattjcly · 15 days
RT @lmstudio: LM Studio now supports MCP! Connect your favorite MCP servers to local LLMs, right on your computer.
[image]

matt @mattjcly · 28 days
RT @lmstudio: No cloud, no problem. Run your AI locally.

matt @mattjcly · 1 month
RT @Prince_Canuma: MLX-VLM v0.1.27 is here 🚀 Thanks to @stablequan, @prnc_vrm, @mattjcly from @lmstudio and the folks at @trycua for the a…

matt @mattjcly · 1 month
Special thanks to @awnihannun, @angeloskath, @Prince_Canuma. If you're interested, contributions are very welcome!

matt @mattjcly · 1 month
The future is multimodal? 🤔 We've been working to rearchitect @lmstudio's multimodal MLX engine to increase stability and bring previously text-only model features to vision models, like text prompt caching and context processing updates. Check out the technical writeup ↓

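For a sense of what "text prompt caching" buys here: the engine can keep the key/value attention state for the longest token prefix shared with the previous request, so a follow-up only pays compute for its new tokens. A minimal Python sketch of that idea, with hypothetical names and nothing from the actual mlx-engine code:

```python
# Minimal sketch of prompt prefix caching (hypothetical names; not the
# mlx-engine implementation): reuse KV-cache state for the longest token
# prefix shared with the previous request, and only run the forward pass
# over the new suffix.

def common_prefix_len(cached: list[int], new: list[int]) -> int:
    """Number of leading tokens the two prompts share."""
    n = 0
    for a, b in zip(cached, new):
        if a != b:
            break
        n += 1
    return n

def suffix_to_process(cached: list[int], new: list[int]) -> list[int]:
    """Only tokens past the shared prefix need compute; the KV cache
    already holds attention state for the prefix."""
    return new[common_prefix_len(cached, new):]

# A follow-up question to a long excerpt shares nearly the whole prompt,
# so almost nothing is recomputed.
excerpt = list(range(1000))        # stand-in for a long tokenized prompt
follow_up = excerpt + [7, 8, 9]    # same excerpt plus a short question
assert suffix_to_process(excerpt, follow_up) == [7, 8, 9]
```

Vision models complicate this because image embeddings occupy cache positions alongside text tokens, which is presumably part of what the rearchitecture has to handle.
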
matt @mattjcly · 1 month
RT @lmstudio: LM Studio 0.3.16 is available now! ✨ What's new in this release ↓
[image]

matt @mattjcly · 2 months
RT @ngxson: Let LM Studio cook 🔥 Ofc cooking with the main ingredients - llama.cpp from @ggml_org 😆😆

matt @mattjcly · 2 months
h/t to @ngxson and @ggml_org for their incredible work and collaboration to help us enable this.

matt @mattjcly · 2 months
Vision LLM libmtmd-ception: we've adopted llama.cpp's new libmtmd in @lmstudio! You can now run Pixtral, SmolVLM, InternVL3, and more - 100% locally. Here's Pixtral telling me about @ngxson's viral tweet demoing the new llama.cpp tech 🔥

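If you want to poke at one of these vision models yourself, LM Studio serves an OpenAI-compatible API locally (by default at http://localhost:1234/v1), so an image request is a standard chat completion. A sketch assuming the openai Python client; the model name is a placeholder for whatever vision model you have loaded:

```python
# Hedged example: query a locally served vision model through LM Studio's
# OpenAI-compatible endpoint. The model identifier below is an assumption;
# substitute the one you have loaded.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Encode a local image as a base64 data URL, the standard way to pass
# images through the chat-completions API.
with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="pixtral-12b",  # placeholder: any loaded vision model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What does this image show?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```
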
matt @mattjcly · 2 months
RT @lmstudio: Engine update: LM Studio llama.cpp/1.29.0
- Qwen2.5VL now supported in GGUF (h/t @ngxson)
- Support for @nomic_ai's new MoE…

matt @mattjcly · 2 months
RT @lmstudio: Qwen3 is available on LM Studio in GGUF and MLX!
Sizes: 0.6B, 1.7B, 4B, 8B, 14B, 30B MoE, 32B, and 235B MoE. Happy Qwen day…

matt @mattjcly · 3 months
RT @yagilb: .@mattjcly fixed an mlx-engine bug that caused undesired prompt reprocessing. The difference in performance is huge, and the fi…

matt @mattjcly · 3 months
RT @lmstudio: ✨ LM Studio 0.3.15 is out now
- Support for NVIDIA RTX 50-series (CUDA 12.8)
- GLM-4 enabled in llama.cpp and MLX
- New syst…

matt @mattjcly · 3 months
Root cause: entangled repetition penalty/cache trimming + a trimming bug. Lesson learned: keep features separate and simple.

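To make the "separate and simple" point concrete: if repetition-penalty bookkeeping and KV-cache trimming share state, trimming the cache can silently invalidate the penalty window (or vice versa). A hypothetical sketch of the decoupled shape, not LM Studio's actual code:

```python
# Hypothetical sketch (not LM Studio's code): repetition-penalty
# bookkeeping and KV-cache accounting own separate state, so changing
# one cannot silently corrupt the other.

class RepetitionPenaltyWindow:
    """Tracks only the recent token IDs the penalty should apply to."""
    def __init__(self, size: int) -> None:
        self.size = size
        self.recent: list[int] = []

    def observe(self, token: int) -> None:
        self.recent.append(token)
        if len(self.recent) > self.size:
            self.recent.pop(0)

class KVCacheTracker:
    """Tracks only how many prompt tokens the KV cache currently covers."""
    def __init__(self) -> None:
        self.covered_tokens = 0

    def extend(self, n: int) -> None:
        self.covered_tokens += n

    def trim_to(self, prefix_len: int) -> None:
        # Trimming is a pure cache operation; it never reaches into the
        # penalty window above, and the penalty never reaches in here.
        self.covered_tokens = min(self.covered_tokens, prefix_len)

penalty = RepetitionPenaltyWindow(size=64)
cache = KVCacheTracker()
cache.extend(1000)
cache.trim_to(800)    # cache shrinks...
penalty.observe(42)   # ...penalty state is unaffected
```
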
matt @mattjcly · 3 months
Prompt caching can be deceptively tricky! LM Studio MLX engine v0.14.0+ (right) fixes a bug in <=v0.13.2 (left) where follow-ups to long excerpts could unnecessarily force a full prompt re-compute 😅 Fix is out now in LM Studio v0.3.15. Try it out @

matt @mattjcly · 4 months
RT @lmstudio: 🚢 LM Studio 0.3.14 is out now with new powerful controls for multi-GPU setups!
> Enable/disable specific GPUs
> Choose which…

matt @mattjcly · 5 months
RT @lmstudio: openai/o3-mini-GGUF?
🥹
👉👈

matt @mattjcly · 5 months
RT @yagilb: We just released LM Studio 0.3.10 with Speculative Decoding support! It's a technique that pairs together a large model, and a…

matt @mattjcly · 5 months
RT @lmstudio: LM Studio 0.3.10 is here with 🔮 Speculative Decoding! This provides inferencing speedups, in some cases 2x or more, with no…

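For readers who hit the truncation: in speculative decoding, a small draft model cheaply proposes a few tokens, then the large target model checks all of them in one batched pass; the longest agreeing prefix is accepted, so each expensive pass can emit several tokens. A toy greedy-verification sketch (real engines verify with rejection sampling to preserve the target's sampling distribution; the stub "models" below are stand-ins, not LM Studio's implementation):

```python
# Toy greedy-verification sketch of speculative decoding. The "models"
# here are deterministic stand-ins so the example runs on its own.

def target_next(ctx: list[int]) -> int:
    """Toy 'large target model': deterministic next-token rule."""
    return (sum(ctx) * 31 + 7) % 100

def draft_next(ctx: list[int]) -> int:
    """Toy 'small draft model': agrees with the target most of the time."""
    t = target_next(ctx)
    return t if len(ctx) % 5 else (t + 1) % 100  # occasional disagreement

def speculative_step(ctx: list[int], k: int = 4) -> list[int]:
    # 1) The draft model proposes k tokens autoregressively (cheap).
    proposed, d_ctx = [], list(ctx)
    for _ in range(k):
        t = draft_next(d_ctx)
        proposed.append(t)
        d_ctx.append(t)

    # 2) The target model verifies all k positions; in a real engine this
    #    is a single batched forward pass, which is where the speedup is.
    verified, v_ctx = [], list(ctx)
    for i in range(k + 1):
        verified.append(target_next(v_ctx))
        if i < k:
            v_ctx.append(proposed[i])

    # 3) Accept the longest agreeing prefix, plus one token from the
    #    target at the first mismatch, so every step emits >= 1 token.
    out = []
    for p, v in zip(proposed, verified):
        if p != v:
            break
        out.append(p)
    out.append(verified[len(out)])
    return out

# Emits several tokens per "expensive" target pass when the draft is good.
print(speculative_step([1, 2, 3]))
```

The output is identical to greedy decoding with the target model alone; the draft model only changes how many target passes it takes to get there.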