
Axolotl
@axolotl_ai
Followers: 2K · Following: 77 · Media: 10 · Statuses: 89
Axolotl is the premier open-source LLM fine-tuning framework. Find us on Discord: https://t.co/wlcE2wlJa9
San Francisco, CA
Joined December 2023
DenseMixer:
Struggling with fine-tuning MoE? Meet DenseMixer, an MoE post-training method that offers a more precise router gradient, making MoE easier to train and better performing! Blog:
ALST:
My first project at @Snowflake AI Research is complete! I present to you Arctic Long Sequence Training (ALST). Paper: Blog: ALST is a set of modular, open-source techniques that enable training on sequences of up to 15 million tokens.
RT @RedHat_AI: Introducing the Axolotl-LLM Compressor integration, designed to make fine-tuning sparse models easier and more efficient…
Are inference costs for serving your fine-tuned models on your mind? With @axolotl_ai and @RedHat_AI's LLM-Compressor you can now fine-tune sparsified LLMs for up to:
- >99% accuracy recovery
- 5x smaller models
- 3x faster inference
RT @capetorch: Training Qwen3 with the Qwen2.5 template works just fine. Way easier to make everything work, and just a one-line change in @axolot…
RT @gokoyeb: Looking to fine-tune models? Meet @axolotl_ai, an open-source tool that simplifies the entire fine-tuning pipeline. Learn ho…
koyeb.com
Learn how to fine-tune Llama 3 using Axolotl. This hands-on guide covers setup, configuration with YAML, LoRA/QLoRA methods, and fine-tuning with serverless GPUs.
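The guide above describes YAML-driven LoRA/QLoRA fine-tuning with Axolotl. As a minimal sketch, a QLoRA config might look something like the following; the keys follow Axolotl's config schema, but the model name, dataset path, and hyperparameter values here are illustrative placeholders, not a recommended recipe:

```yaml
# Hypothetical minimal QLoRA fine-tune config for Axolotl.
# Values are illustrative placeholders; adjust for your hardware and data.
base_model: meta-llama/Meta-Llama-3-8B
load_in_4bit: true           # QLoRA: load the base model in 4-bit
adapter: qlora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true     # attach LoRA adapters to all linear layers

datasets:
  - path: ./data/train.jsonl # hypothetical dataset path
    type: alpaca             # instruction-style prompt format

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 0.0002
optimizer: adamw_bnb_8bit
output_dir: ./outputs/llama3-qlora
```

Training would then be launched by pointing the Axolotl CLI at this file, e.g. `axolotl train config.yml`.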
Hacker house was in full force last week for @aiDotEngineer, with events with @runpod_io plus a collab with @googlecloud. Sailing with the team + @capetorch visits @SHACK15sf.
RT @sophiamyang: @MistralAI Magistral Small is an open-weight model, and is available for self-deployment under the Apache 2.0 license…
RT @winglian: Using @googlecloud + @axolotl_ai can help you streamline your large multimodal fine-tuning workflows.
Deploy Axolotl with @googlecloud for your production workloads with simple configuration-based orchestration.
Using @googlecloud + @axolotl_ai can help you streamline your large multimodal fine-tuning workflows.
RT @runpod_io: Spot the RunPod truck in SF and get $250 in GPU credits. Snap a pic + tag @runpod_io and we'll send you a credit code. Hin…
RT @iScienceLuvr: Model Merging in Pre-training of Large Language Models. "We present the Pre-trained Model Averaging (PMA) strategy, a nov…
RT @winglian: Come join us on June 8th at Shack15 to accelerate AI on AMD, ARM, and other accelerators!