accelerate(e/acc)

@EAccelerate_42

Followers: 3K
Following: 2K
Media: 377
Statuses: 4K

Software, Hardware, Astrophysics, Web3, Stocks, AI, MLX, CUDA. In perplexity over the answer to life, the universe, and everything. Founder at https://t.co/R6wDNPGrVc

Joined July 2013
@EAccelerate_42
accelerate(e/acc)
4 months
💡 Learn to fine-tune the Gemma 270M model in a few seconds, locally on your MacBook ✨ The new 2025 guide covers:
• LoRA, QLoRA, DoRA techniques
• Interactive parameter calculator
• 2,800+ ready-to-use MLX models
• OpenAI GPT-OSS, Gemma 3 & Qwen3 support
From zero to
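For flavor, a minimal sketch of what that local LoRA workflow looks like with the mlx-lm trainer. The model id and data path below are placeholders, not taken from the guide, and flag names vary across mlx-lm versions.

```python
# Hedged sketch: LoRA fine-tune with mlx-lm's trainer, then load the adapter.
# Model id and data path are placeholders; flags differ across mlx-lm versions.
import subprocess

subprocess.run([
    "python", "-m", "mlx_lm.lora",
    "--model", "mlx-community/gemma-3-270m-it-4bit",  # placeholder MLX model id
    "--train",
    "--data", "./my_dataset",  # expects train.jsonl / valid.jsonl
    "--iters", "100",
], check=True)

from mlx_lm import load, generate

# "adapters" is mlx-lm's default adapter output directory
model, tokenizer = load("mlx-community/gemma-3-270m-it-4bit", adapter_path="adapters")
print(generate(model, tokenizer, prompt="Hello", max_tokens=50))
```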
@EAccelerate_42
accelerate(e/acc)
5 months
👀Learn to Build your OWN Language Model in an hour. 🍏 Apple Silicon (#MLX), 🐲 NVIDIA, or good old Intel/AMD CPU—all covered. ⚡️ One-line setup + TinyStories demo. Hack tonight 👉 https://t.co/WO61AnhH8M 🔁 RT if you’d rather create models than just prompt them.
0
1
9
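As a sketch of the build-your-own-LM idea, here is a character-level model on a stand-in corpus in plain PyTorch. This is an illustration, not the linked repo's code; swap in the TinyStories text and a Transformer block to go further.

```python
# Minimal character-level language model: predict the next character.
import torch
import torch.nn as nn

text = "once upon a time there was a tiny model . " * 200  # stand-in corpus
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

class CharLM(nn.Module):
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)
    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.head(h)

model = CharLM(len(chars))
opt = torch.optim.AdamW(model.parameters(), lr=3e-3)
block = 32
for step in range(200):
    ix = torch.randint(0, len(data) - block - 1, (16,))
    xb = torch.stack([data[i:i + block] for i in ix])          # input window
    yb = torch.stack([data[i + 1:i + block + 1] for i in ix])  # shifted target
    loss = nn.functional.cross_entropy(model(xb).flatten(0, 1), yb.flatten())
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final loss: {loss.item():.3f}")
```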
@EAccelerate_42
accelerate(e/acc)
1 day
GLM 4.7 and MiniMax M2.1 are out. First impression: GLM 4.7 is extremely good at complex tasks, a clear Sonnet 4.5 replacement, while M2.1 is extremely good for UI/front-end development.
0
0
0
@EAccelerate_42
accelerate(e/acc)
1 day
2026: latent-space RL pretraining
@rasbt
Sebastian Raschka
2 days
The LLM training eras:
202x Pre-training (foundation)
2022 RLHF + PPO
2023 LoRA SFT
2024 Mid-Training
2025 RLVR + GRPO
0
0
0
@EAccelerate_42
accelerate(e/acc)
1 day
.@Zai_org absolutely cooked 🚀!!!! Huge improvements over GLM 4.6, and I can confidently say it's better than Sonnet 4.5.
0
0
0
@EAccelerate_42
accelerate(e/acc)
2 days
We need a page like this for open-source drops (apart from Hugging Face, just to keep track), with some benchmarks.
@donvito
Melvin Vivas
4 days
there were so many Gemini releases that they made a page for us to keep track https://t.co/6TBKP6PmkK
0
0
0
@tom_doerr
Tom Dörr
3 days
iOS app with RAG and web search for Apple Intelligence https://t.co/cBctP0CdVm
1
20
105
@EAccelerate_42
accelerate(e/acc)
4 days
Exo 1.0 + RDMA + MLX distributed = 🔥
@awnihannun
Awni Hannun
4 days
Link to video:
0
0
7
@SkylerMiao7
Skyler Miao
5 days
Huge shoutout to Robin — the amazing PM behind MiniMax Agent. She not only drives the product, but also defines the benchmarks that shape how the Agent actually evolves. Custom Agent is easily my favorite feature so far. Being able to compose multiple sub-agents as different
@RobinJumps
RobinJumps
7 days
We’ve been iterating fast! Here are two highlights from December: 1️⃣ Custom Mode Pro multi-agent: users asked for more flexibility, so we shipped it. You can now configure sub-agents with their own context. We’re seeing users share powerful custom agents, and more ready-to-use
3
2
48
@EAccelerate_42
accelerate(e/acc)
5 days
What happened to Opus!!!! Extremely bad performance since yesterday!
0
0
0
@osanseviero
Omar Sanseviero
5 days
Want to try our Tiny Garden and Mobile Actions demos powered by FunctionGemma? Try them directly on your phone https://t.co/Ah6fyvwiFM
6
9
119
@EAccelerate_42
accelerate(e/acc)
6 days
Working extremely fast, even with voice cloning. Built a Gradio app for this chatterbox-turbo model https://t.co/EpYDMgJian
@Prince_Canuma
Prince Canuma
6 days
Chatterbox Turbo by @resembleai now on MLX 🚀🎉 You can now run it locally on your Mac and it supports voice cloning and emotion control. I'm getting 3.8x faster than real-time. > pip install -U mlx-audio Model collection 👇🏽
1
0
2
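As a sketch, a minimal Gradio front end like the one described above. The tts function is a placeholder to be wired to mlx-audio's Chatterbox pipeline (pip install -U mlx-audio, per the quoted tweet); the exact mlx-audio call is omitted because its API may differ by version.

```python
# Minimal Gradio UI: text + reference clip in, cloned speech out.
import gradio as gr

def tts(text: str, reference_audio):
    # Placeholder: call the mlx-audio Chatterbox TTS model here, passing the
    # reference clip for voice cloning; return (sample_rate, waveform).
    raise NotImplementedError

demo = gr.Interface(
    fn=tts,
    inputs=[gr.Textbox(label="Text"),
            gr.Audio(label="Voice to clone", type="filepath")],
    outputs=gr.Audio(label="Generated speech"),
    title="Chatterbox Turbo (local, MLX)",
)
demo.launch()
```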
@EAccelerate_42
accelerate(e/acc)
7 days
Huge W. Chatterbox is the best open-source voice model company.
@0xDevShah
Dev Shah
7 days
“If this is the best model, why open source it?” Because the goal isn’t to fight closed-source companies. We want them to succeed. This is a massive market. We can all win. The real advantage of open source isn’t ideology. It’s adoption. Not everyone can (or should) pay
0
0
0
@EAccelerate_42
accelerate(e/acc)
7 days
Awesome
@UnslothAI
Unsloth AI
7 days
You can now fine-tune LLMs and deploy them directly on your phone! 🚀 We collabed with PyTorch so you can export and run your trained model 100% locally on your iOS or Android device. Deploy Qwen3 on Pixel 8 and iPhone 15 Pro at ~40 tokens/sec. Guide: https://t.co/ukkbzycGX6
0
0
1
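For context, a rough sketch of the PyTorch-to-device path a guide like this relies on: export a trained module to an ExecuTorch .pte file that the mobile runtime loads. A toy module stands in for the fine-tuned LLM; the real Qwen3 flow adds quantization and tokenizer packaging (see the linked guide).

```python
# Hedged sketch: lower a trained nn.Module to an ExecuTorch program.
import torch
from executorch.exir import to_edge

class Tiny(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x) @ x.T

ep = torch.export.export(Tiny(), (torch.randn(2, 8),))  # trace with example input
et_program = to_edge(ep).to_executorch()                 # lower to ExecuTorch
with open("model.pte", "wb") as f:
    f.write(et_program.buffer)                           # bundle this in the app
```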
@SkylerMiao7
Skyler Miao
7 days
I’ve been using M2.1 (latest ckpt for internal test) full-time over the past few days, purely as a developer — not training it, just writing and shipping code with it. It’s not perfect yet, but the jump from M2 → M2.1 is very noticeable. I’m now comfortable relying on it for
39
14
236
@EAccelerate_42
accelerate(e/acc)
7 days
🙌 awesome!!!
@profvjreddi
Vijay Janapa Reddi
8 days
Today I’m sharing Tiny🔥Torch—an educational framework for ML systems, built from scratch. You don’t just train models, you build tensors, autograd, optimizers, and data loaders, and see how design choices affect memory, performance, and efficiency. If you use @PyTorch or
0
0
0
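For context, the core idea TinyTorch teaches, shown as a minimal scalar autograd sketch (micrograd-style, not TinyTorch's actual API): values record their parents, and backward() replays the chain rule in reverse topological order.

```python
# Reverse-mode autograd from scratch on a scalar value type.
class Value:
    def __init__(self, data, parents=()):
        self.data, self.grad = data, 0.0
        self._parents, self._backward = parents, lambda: None

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():  # chain rule for d(out)/d(self) and d(out)/d(other)
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topological order over the graph, then propagate gradients parent-ward.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x, w = Value(3.0), Value(-2.0)
(x * w).backward()
print(x.grad, w.grad)  # -2.0 3.0
```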
@EAccelerate_42
accelerate(e/acc)
7 days
Woah 🤯
@AIatMeta
AI at Meta
7 days
🔉 Introducing SAM Audio, the first unified model that isolates any sound from complex audio mixtures using text, visual, or span prompts. We’re sharing SAM Audio with the community, along with a perception encoder model, benchmarks and research papers, to empower others to
0
0
0
@SkylerMiao7
Skyler Miao
8 days
This is the first work showing a scaling curve linking visual tokenizers with Diffusion Transformers: better generations without extra compute, just by scaling tokenizer training! Extremely proud and thrilled for my colleagues! @jingfeng_yao @Hailuo_AI
@MiniMax__AI
MiniMax (official)
8 days
MiniMax (Hailuo) Video Team Has Open Sourced VTP (Visual Tokenizer Pre-training)! VTP is a scalable pre-training framework for visual tokenizers, built for next-gen generative models. It challenges the conventional belief in Latent Diffusion Models that scaling the stage-1
0
4
36
@EAccelerate_42
accelerate(e/acc)
8 days
This is huge
@MiniMax__AI
MiniMax (official)
8 days
MiniMax (Hailuo) Video Team Has Open Sourced VTP (Visual Tokenizer Pre-training)! VTP is a scalable pre-training framework for visual tokenizers, built for next-gen generative models. It challenges the conventional belief in Latent Diffusion Models that scaling the stage-1
0
0
0
@0xDevShah
Dev Shah
8 days
Don’t think people fully realize this yet, but we beat China and companies with billions in funding at Voice AI. Our model is faster, cheaper, and higher quality than anything else on the market. And MIT licensed. It effectively ends the intelligence deflation problem in this
@0xDevShah
Dev Shah
8 days
This is the DeepSeek moment for Voice AI. Today we’re releasing Chatterbox Turbo — our state-of-the-art MIT licensed voice model that beats ElevenLabs Turbo and Cartesia Sonic 3! We’re finally removing the trade-offs that have held voice AI back. Fast models sound robotic.
83
120
1K