accelerate(e/acc)
@EAccelerate_42
Followers 3K · Following 2K · Media 377 · Statuses 4K
Software, Hardware, Astrophysics, Web3, Stocks, AI, MLX, CUDA. In perplexity over the answer to life, the universe and everything. Founder at https://t.co/R6wDNPGrVc
Joined July 2013
💡Learn to fine-tune the Gemma 270M model in a few seconds, locally on your MacBook ✨ New 2025 guide covers: • LoRA, QLoRA, DoRA techniques • Interactive parameter calculator • 2,800+ ready-to-use MLX models • OpenAI GPT-OSS, Gemma 3 & Qwen3 support From zero to
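The "parameter calculator" idea from the guide can be sketched in a few lines: for a frozen weight of shape (d_out, d_in), a rank-r LoRA adapter adds two low-rank matrices, i.e. r · (d_in + d_out) trainable parameters. The function names and dimensions below are illustrative assumptions, not taken from the guide.

```python
# Hypothetical LoRA trainable-parameter calculator (illustrative sketch).
# For a frozen weight W of shape (d_out, d_in), LoRA learns A (r x d_in)
# and B (d_out x r), adding r * (d_in + d_out) trainable parameters.

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters added by one LoRA adapter."""
    return rank * (d_in + d_out)

def model_lora_params(layer_shapes, rank: int) -> int:
    """Sum adapter parameters over all adapted layers."""
    return sum(lora_params(d_in, d_out, rank) for (d_out, d_in) in layer_shapes)

if __name__ == "__main__":
    # Toy example: 4 square attention projections of a 640-dim model, rank 8.
    shapes = [(640, 640)] * 4
    print(model_lora_params(shapes, rank=8))  # 4 * 8 * 1280 = 40960
```

This also shows why LoRA is cheap: the adapter count scales with r, not with d_in · d_out, so a small rank barely moves the memory budget.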
👀Learn to Build your OWN Language Model in an hour. 🍏 Apple Silicon (#MLX), 🐲 NVIDIA, or good old Intel/AMD CPU—all covered. ⚡️ One-line setup + TinyStories demo. Hack tonight 👉 https://t.co/WO61AnhH8M 🔁 RT if you’d rather create models than just prompt them.
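In the spirit of "build your own language model," the smallest possible LM is a bigram character model: count next-character frequencies from a corpus, then generate by sampling from those counts. This is a framework-free stdlib sketch, not the linked tutorial's code.

```python
import random
from collections import Counter, defaultdict

# Minimal bigram character "language model" (illustrative sketch):
# train = count character transitions; generate = sample next char
# proportionally to observed counts.

def train_bigram(text):
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = counts.get(out[-1])
        if not nxt:
            break  # no observed successor; stop early
        chars, weights = zip(*nxt.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

if __name__ == "__main__":
    corpus = "once upon a time there was a tiny story about a tiny model "
    model = train_bigram(corpus)
    print(generate(model, "t", 20))
```

The real tutorial trains a neural model, but the interface is the same: fit transition statistics, then sample autoregressively one token at a time.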
GLM 4.7 and MiniMax M2.1 are out. First impression: GLM 4.7 is extremely good at complex tasks, a clear Sonnet 4.5 replacement, while M2.1 is extremely good for UI/front-end development
. @Zai_org absolutely cooked 🚀 !!!! Huge improvements over GLM 4.6, and I can confidently say it's better than Sonnet 4.5
We need a page like this for open-source drops (apart from Hugging Face, just to keep track), with some benchmarks
There were so many Gemini releases that they made a page for us to keep track: https://t.co/6TBKP6PmkK
Huge shoutout to Robin — the amazing PM behind MiniMax Agent. She not only drives the product, but also defines the benchmarks that shape how the Agent actually evolves. Custom Agent is easily my favorite feature so far. Being able to compose multiple sub-agents as different
We’ve been iterating fast! Here are two highlights from December: 1️⃣ Custom Mode Pro multi-agent: users asked for more flexibility, so we shipped it. You can now configure sub-agents with their own context. We’re seeing users share powerful custom agents, and more ready-to-use
What happened to Opus?! Extremely bad performance since yesterday!
Want to try our Tiny Garden and Mobile Actions demos powered by FunctionGemma? Try them directly on your phone: https://t.co/Ah6fyvwiFM
Working extremely fast, even with voice cloning. Built a Gradio app for this chatterbox-turbo model https://t.co/EpYDMgJian
Chatterbox Turbo by @resembleai now on MLX 🚀🎉 You can now run it locally on your Mac and it supports voice cloning and emotion control. I'm getting 3.8x faster than real-time. > pip install -U mlx-audio Model collection 👇🏽
Huge W. Chatterbox is the best open-source voice model company
“If this is the best model, why open source it?” Because the goal isn’t to fight closed-source companies. We want them to succeed. This is a massive market. We can all win. The real advantage of open source isn’t ideology. It’s adoption. Not everyone can (or should) pay
Awesome
You can now fine-tune LLMs and deploy them directly on your phone! 🚀 We collabed with PyTorch so you can export and run your trained model 100% locally on your iOS or Android device. Deploy Qwen3 on Pixel 8 and iPhone 15 Pro at ~40 tokens/sec. Guide: https://t.co/ukkbzycGX6
I’ve been using M2.1 (latest ckpt for internal test) full-time over the past few days, purely as a developer — not training it, just writing and shipping code with it. It’s not perfect yet, but the jump from M2 → M2.1 is very noticeable. I’m now comfortable relying on it for
🙌 awesome!!!
Today I’m sharing Tiny🔥Torch—an educational framework for ML systems, built from scratch. You don’t just train models, you build tensors, autograd, optimizers, and data loaders, and see how design choices affect memory, performance, and efficiency. If you use @PyTorch or
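The build-it-yourself idea can be illustrated with the kind of component such a framework has you implement first: a scalar autograd node that records its inputs and a local-gradient rule, then backpropagates via the chain rule. This is a generic sketch, not Tiny🔥Torch's actual code.

```python
# Minimal scalar autograd node (generic sketch, not TinyTorch's code):
# each op records its parents and a closure that applies the chain rule.

class Value:
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward_rule = None

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def rule():
            self.grad += out.grad       # d(a+b)/da = 1
            other.grad += out.grad      # d(a+b)/db = 1
        out._backward_rule = rule
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def rule():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward_rule = rule
        return out

    def backward(self):
        # Topologically order the graph, then run rules output-to-inputs.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            if v._backward_rule:
                v._backward_rule()

if __name__ == "__main__":
    x, y = Value(2.0), Value(3.0)
    z = x * y + x   # z = xy + x
    z.backward()
    print(z.data, x.grad, y.grad)  # 8.0, dz/dx = y+1 = 4.0, dz/dy = x = 2.0
```

Real frameworks wrap the same pattern around tensors instead of scalars; seeing it at this size is exactly what makes the memory and performance trade-offs visible.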
This is the first work showing a scaling curve linking visual tokenizers with Diffusion Transformers: better generations without extra compute, just by scaling tokenizer training! Extremely proud and thrilled for my colleagues! @jingfeng_yao @Hailuo_AI
MiniMax (Hailuo) Video Team Has Open Sourced VTP (Visual Tokenizer Pre-training)! VTP is a scalable pre-training framework for visual tokenizers, built for next-gen generative models. It challenges the conventional belief in Latent Diffusion Models that scaling the stage-1
Don’t think people fully realize this yet, but we beat China and companies with billions in funding at Voice AI. Our model is faster, cheaper, and higher quality than anything else on the market. And MIT licensed. It effectively ends the intelligence deflation problem in this
This is the DeepSeek moment for Voice AI. Today we’re releasing Chatterbox Turbo — our state-of-the-art MIT licensed voice model that beats ElevenLabs Turbo and Cartesia Sonic 3! We’re finally removing the trade-offs that have held voice AI back. Fast models sound robotic.