Unsloth AI (@UnslothAI)
Open source LLM fine-tuning & RL! 🦥 https://t.co/2kXqhhvLsb
San Francisco, CA · Joined November 2023
34K Followers · 5K Following · 117 Media · 452 Statuses
@UnslothAI · Unsloth AI · 2 months
Can a 1-bit or 3-bit quantized model outperform GPT-4.1 or Claude-Opus-4? Yes! Today, we're excited to show how LLMs like DeepSeek-V3.1 can be quantized to just 1-bit or 3-bit and still beat SOTA models like Claude-Opus-4 (thinking) on Aider Polyglot. Details and blog below!
41 · 202 · 1K
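To make "1-bit" concrete: extreme quantization stores each weight as a single sign bit plus one shared per-block scale. The sketch below uses absmean scaling (as in BitNet-style work) purely as an illustration; it is not Unsloth's actual dynamic quantization scheme, which mixes bit-widths per layer.

```python
# Toy sketch of 1-bit (sign) quantization with one shared per-block scale.
# Illustration only -- NOT Unsloth's dynamic quantization method.

def quantize_1bit(block):
    """Quantize a block of floats to {-1, +1} times one shared scale."""
    scale = sum(abs(w) for w in block) / len(block)  # absmean scale
    signs = [1.0 if w >= 0 else -1.0 for w in block]
    return signs, scale

def dequantize_1bit(signs, scale):
    return [s * scale for s in signs]

block = [0.4, -0.2, 0.1, -0.5]
signs, scale = quantize_1bit(block)
print(round(scale, 6))                              # shared scale for the block
print([round(v, 6) for v in dequantize_1bit(signs, scale)])
```

Each value costs 1 bit plus an amortized share of the scale, versus 16 bits in bf16, which is where the large memory reductions come from.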
@UnslothAI · Unsloth AI · 2 days
You can now fine-tune DeepSeek-OCR with our free notebook! We fine-tuned DeepSeek-OCR, improving its language understanding by 89% and reducing the Character Error Rate from 149% to 60%. Blog: https://t.co/M66jBNDkVZ GitHub: https://t.co/aZWYAtakBP Colab: colab.research.google.com
13 · 256 · 2K
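A CER above 100% can look like a typo, but it is valid: the standard definition is edit distance divided by reference length, so a model that inserts far more text than the reference scores over 100%. A minimal sketch of the standard metric (the blog's exact evaluation setup may differ):

```python
# Character Error Rate (CER) = Levenshtein edits / reference length.
# Insertions count as edits, so CER can exceed 100% when the model emits
# much more text than the reference -- which is how a CER of 149% arises.

def levenshtein(ref, hyp):
    """Edit distance between two strings (insert/delete/substitute)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def cer(ref, hyp):
    return levenshtein(ref, hyp) / len(ref)

print(cer("cat", "cat"))        # 0.0
print(cer("cat", "catalogue"))  # 2.0, i.e. a CER of 200%
```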
@Alibaba_Qwen · Qwen · 4 days
You can now run Qwen3-VL locally with Unsloth AI. 👇 Fine-tune & RL via free notebooks.
@UnslothAI · Unsloth AI · 6 days
You can now run Qwen3-VL locally! 💜 Run the 235B variant for SOTA vision/OCR on 128GB unified memory (dynamic 4-bit). Includes our chat template fixes. Qwen3-VL-2B runs at ~40 t/s on 4GB RAM. Fine-tune & RL via Unsloth free notebooks & export to GGUF. https://t.co/L5sOjsgYhm
21 · 71 · 584
@UnslothAI · Unsloth AI · 6 days
To run Qwen3-VL, you can read our step-by-step tutorial and download the GGUFs from our Hugging Face collection: huggingface.co
0 · 3 · 30
@UnslothAI · Unsloth AI · 10 days
We teamed up with @NVIDIA to teach you how to fine-tune LLMs on Blackwell & RTX 50 GPUs. Unsloth makes training on Blackwell up to 2× faster with 70% less VRAM, with no accuracy loss. Learn how to use our new Docker image & more in the official NVIDIA blog: https://t.co/IwIL7cpC9w
22 · 88 · 638
@NVIDIAAIDev · NVIDIA AI Developer · 14 days
Bring high-accuracy, efficient models to RTX AI PCs with @UnslothAI. Now with Quantization-Aware Training: use less VRAM while maintaining accuracy and improving performance.
@UnslothAI · Unsloth AI · 15 days
You can now quantize LLMs to 4-bit and recover 70% accuracy via Quantization-Aware Training. We teamed up with @PyTorch to show how QAT enables:
• 4x less VRAM with no inference overhead
• 1-3% increase in raw accuracy (GPQA, MMLU Pro)
Notebook & Blog: https://t.co/2OP1KgvQDN
0 · 17 · 110
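The core mechanism behind QAT is "fake quantization": during training, the forward pass rounds weights to the low-bit grid so the network learns to tolerate the rounding error, while gradients still update a full-precision copy (the straight-through estimator). A minimal sketch assuming symmetric int4 with per-tensor absmax scaling; this is an illustration, not the torchao/Unsloth implementation:

```python
# Minimal sketch of "fake quantization" for QAT: quantize-dequantize floats
# through a symmetric int4 grid [-8, 7] so the forward pass sees rounding
# error. Per-tensor absmax scaling is assumed for simplicity.

def fake_quant_int4(weights):
    """Return the dequantized values the forward pass would use."""
    scale = max(abs(w) for w in weights) / 7           # map absmax onto code +7
    q = [max(-8, min(7, round(w / scale))) for w in weights]  # int4 codes
    return [qi * scale for qi in q]                    # back to float

w = [0.7, -0.35, 0.1, -0.05]
print(fake_quant_int4(w))  # each value snapped to the nearest int4 grid point
```

At inference the same int4 codes are stored directly, which is why QAT adds no inference overhead relative to ordinary post-training int4 quantization.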
@UnslothAI · Unsloth AI · 14 days
We showcased our one-click fine-tuning UI for the first time at the NVIDIA x Mistral AI x Unsloth event at Y Combinator! 🔥🦥 Huge thanks to everyone who came! 🥰
@NVIDIAAIDev · NVIDIA AI Developer · 14 days
🙌 Thank you to everyone who joined us at AI Dev Night with @UnslothAI and @MistralAI. We're looking forward to meeting more of you at #PyTorchCon #OpenSourceAIWeek.
6 · 14 · 172
@bhutanisanyam1 · Sanyam Bhutani · 14 days
OpenEnvs for Reinforcement Learning! 🙏 We are launching a universal RL Environment interface today, teaming up with @huggingface and @UnslothAI. Let’s take a trip down memory lane: It’s 2016, you read some papers. RL looks promising. But the reality? Cartpole is best we…
3 · 17 · 192
@UnslothAI · Unsloth AI · 16 days
We just hit 100 million lifetime downloads on Hugging Face! 🦥🤗 Huge thanks to all of you: the amazing community, model creators, and the HF team. 💖
15 · 27 · 335
@Alibaba_Qwen · Qwen · 16 days
Huge thanks to @UnslothAI for enabling free, easy fine-tuning of Qwen3-VL (8B)! 🙌
@UnslothAI · Unsloth AI · 21 days
You can now fine-tune Qwen3-VL (8B) for free with our notebook! Unsloth trains VLMs 1.7x faster with 60% less VRAM and 8x longer context, with no accuracy loss. GitHub: https://t.co/aZWYAt9MMh Qwen3-VL GRPO Colab: https://t.co/HkjYydXDnR Qwen3-VL Colab:
15 · 40 · 612
@UnslothAI · Unsloth AI · 22 days
You can now train models up to 200B parameters locally on NVIDIA DGX Spark with Unsloth! 🦥 Fine-tune, RL & deploy OpenAI gpt-oss-120b via our free notebook in 68GB unified memory: https://t.co/4ujiAXIpBt Read our step-by-step guide in collab with NVIDIA: https://t.co/J6tJ1Rk7tW
26 · 120 · 828
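A quick back-of-envelope calculation shows why a ~120B-parameter model fits in 68GB of unified memory: at roughly 4 bits (0.5 bytes) per weight the weights alone take about 60GB, leaving headroom for the KV cache and activations. This is a rough estimate only; real GGUF and dynamic-quant file sizes vary per layer.

```python
# Rough memory footprint of model weights at a given bit-width.
# Decimal GB; ignores per-block scales, KV cache, and activations.

def weight_gb(n_params, bits_per_weight):
    return n_params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB

params = 120e9
print(weight_gb(params, 16))  # full bf16: 240.0 GB -- far too big
print(weight_gb(params, 4))   # ~4-bit:    60.0 GB -- fits in 68GB
```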
@UnslothAI · Unsloth AI · 28 days
Thank you @dkundel from OpenAI and Barath from NVIDIA for the collab. 🥰 Watch Dominik's full gpt-oss presentation:
2 · 2 · 35
@UnslothAI · Unsloth AI · 28 days
OpenAI shows how gpt-oss can autonomously beat 2048 using reinforcement learning (RL). Training was done locally with Unsloth on NVIDIA DGX Spark. You can also do it free on Colab. 🦥 OpenAI DevDay notebook: https://t.co/ptsilrBosy
18 · 132 · 1K
@dkundel · dominik kundel · 29 days
My talk from OpenAI DevDay 2025 is live! Learn more about gpt-oss, how it fits into the broader OpenAI ecosystem, how to combine it with GPT-5, or use reinforcement fine-tuning with @UnslothAI. All wrapped up with a guest appearance by the @NVIDIAAIDev DGX Spark!
11 · 12 · 90
@AIatAMD · AI at AMD · 1 month
Join the Synthetic Data AI Agents Challenge! 🚀 A two-day virtual hackathon, hosted by @UnslothAI, AMD, and @PyTorch, to build and battle AI agents! Did we mention the top team can win $3,000? 📆 When? October 18–20. Finals, judging, and award ceremony will be held during…
3 · 13 · 62
@UnslothAI · Unsloth AI · 1 month
We made a free notebook that fine-tunes IBM Granite 4.0 into a powerful support agent! The agent enables real-time analysis and resolution of customer interactions. You'll also learn how to train models using data from Google Sheets. Colab Notebook: https://t.co/GQSMdXmSwO
11 · 100 · 623
@UnslothAI · Unsloth AI · 1 month
IBM releases Granite-4.0, their new series of open models! Run the 'Micro' 3B model on 4GB RAM or the 'Small' 32B on 40GB RAM. Granite-4.0 excels at agentic tasks, doc analysis, RAG, edge AI applications & more! Dynamic GGUFs: https://t.co/uK9KoaqLbw Guide: https://t.co/q9TW0k7hMd
21 · 131 · 784