Vincent Amato
@vincentaamato
Followers: 253 · Following: 1K · Media: 7 · Statuses: 111
CS @BrownUniversity
United States
Joined May 2024
Mushroom AI now has a web app that uses the same underlying model as the iOS app - built with @sveltejs! Give it a try:
If you want to use DINOv3 in your own MLX Swift applications, check out: https://t.co/Z84Ns1oJCR
github.com
A native Swift implementation of Meta’s DINOv3 using MLX Swift. - vincentamato/MLXDINOv3
If you get a really bad identification, the mushroom may not have been included in the training data. Please report it here: https://t.co/mPYqyPCgZ1
As it says after every identification, NEVER rely solely on app results for mushroom identification. Always consult an expert. This app is purely for educational and informative purposes.
I just released my first app on the App Store! I made a mushroom classifier with DINOv3 and ported it to MLX Swift, meaning the model runs 100% locally. The classifier achieves impressive accuracy on mushrooms included in the dataset. This app is completely free and I have no
Latest mlx-lm is up: pip install -U mlx-lm
- New models: LFM2 MoE, Nanochat, Jamba, Qwen3 VL (text-only)
- Memory-efficient prefill for SSMs
- Distributed evals
- And more fixes / QoL improvements.
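For anyone trying the update, here is a minimal Python sketch of calling mlx-lm after the pip install above; the model repo id is only a placeholder, so swap in any MLX-converted model you actually want to run.

# Minimal sketch: load an MLX-converted model and generate text with mlx-lm.
# The repo id below is a placeholder, not one mentioned in the post.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Llama-3.2-3B-Instruct-4bit")  # placeholder repo id
prompt = "Summarize what a memory-efficient prefill does for SSM models."
text = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
print(text)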
It has begun 🚀
Haven't spoken about Marvis-TTS in a minute! We got some awesome news coming soon. Some surprising results, this might need a paper. Let us cook with the @PrimeIntellect stove 🚀
Qwen3-VL 30B-A3B at 4-bit precision, running on Apple silicon at 80 tok/s with MLX! @awnihannun @Prince_Canuma @ostensiblyneil @lmstudio
If you use Chrome and have upgraded to macOS Tahoe, disable "experimental prediction for scroll events" at chrome://flags to make it not feel like you just downgraded to a RAM-starved Windows 98 PC. (Thank you, @jespervega!)
mistralai/magistral-small-2509
> New 24B reasoning model from @MistralAI
> Supports 🏞️ image input and 🛠️ tool calling
> Available in both GGUF and MLX in LM Studio!
https://t.co/JikgYwogfO
lmstudio.ai
Model from MistralAI that supports reasoning, image input, and tool calling.
| ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄|
| lms get qwen3-next --mlx |
|_______________|
(\__/) ||
(•ㅅ•) ||
/ づ
A simple take on the Transformer: MLP layers are for long-term memory. Attention is for short-term memory. The state-of-the-art for efficient MLP layers is the switch-style MoE. The state-of-the-art for efficient attention is likely sliding window attention with sinks. I’m
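To make that last point concrete, here is a toy sketch (not from the post) of the masking pattern behind sliding window attention with sink tokens; the window size and sink count are arbitrary placeholders.

import numpy as np

def sliding_window_sink_mask(seq_len, window=4, num_sinks=2):
    # mask[i, j] is True when query position i may attend to key position j:
    # causal (j <= i), and j is either one of the first num_sinks "sink" tokens
    # or falls within the last `window` positions before i.
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    causal = j <= i
    recent = (i - j) < window
    sink = j < num_sinks
    return causal & (recent | sink)

print(sliding_window_sink_mask(8).astype(int))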
Made my first contributions to OSS and to none other than @lmstudio's MLX engine and Apple's mlx-lm package! Thank you to @mattjcly, @ostensiblyneil, and @awnihannun for your help and guidance along the way! https://t.co/EoQpRIjLhC
https://t.co/Cq6Dk0eKIH
github.com
Adds the Qwen2-VL model family for text-only inference. Sibling of lmstudio-ai/mlx-engine#215. Because Qwen2-VL and Qwen2.5-VL have the same language model implementation, the Qwen2.5-VL model fami...
GPT-1 when asked “What would you say if you could talk to a future OpenAI model?” There’s something beautiful about this