Fahd Mirza
@fahdmirza
Followers
1K
Following
171
Media
300
Statuses
1K
Helping you become an AI Engineer at https://t.co/4q6SeiFFkz | Business inquiries: [email protected]
Australia
Joined March 2007
NVIDIA just dropped the biggest CUDA update in 20 years. CUDA Tile lets you write GPU code without managing threads. No more threadIdx calculations. No manual memory juggling. I broke it down in 10 minutes with actual code examples 👇 https://t.co/mIm3wfxEow
Replies: 0 · Reposts: 0 · Likes: 0
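For context, below is a minimal Numba-CUDA sketch of the manual thread-indexing style the tweet says CUDA Tile eliminates. The kernel is standard CUDA-in-Python; the actual CUDA Tile API is not shown, since I haven't confirmed its exact syntax.

```python
# Classic CUDA style: derive a global index from thread/block coordinates
# by hand, then bounds-check it yourself. This is the boilerplate that a
# tile-level programming model is meant to remove.
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(a, b, out):
    i = cuda.blockIdx.x * cuda.blockDim.x + cuda.threadIdx.x  # manual indexing
    if i < out.shape[0]:                                      # manual bounds check
        out[i] = a[i] + b[i]

n = 1 << 20
a = np.ones(n, dtype=np.float32)
b = np.ones(n, dtype=np.float32)
out = np.zeros(n, dtype=np.float32)
threads = 256
blocks = (n + threads - 1) // threads   # manual launch geometry
add_kernel[blocks, threads](a, b, out)  # Numba copies the arrays to the GPU
print(out[:3])
```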
GAME-CHANGER ALERT! VibeVoice-Realtime makes AI talk INSTANTLY! 🤯 Just 300ms latency, and LLMs can speak from their FIRST tokens. No more waiting for complete responses.
✅ Streaming text input
✅ Single speaker focus
✅ Lightning-fast response
Watch video now:
Replies: 0 · Reposts: 0 · Likes: 1
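To make "speak from the first tokens" concrete, here is a hedged sketch: an LLM's streamed tokens are fed to a TTS engine as they arrive instead of after the full reply. Only the OpenAI streaming call is real; the VibeVoiceRealtime class and its feed()/flush() methods are hypothetical stand-ins, not the model's actual API.

```python
from openai import OpenAI

class VibeVoiceRealtime:  # hypothetical wrapper, for illustration only
    def feed(self, text: str) -> None:
        """Accept a text fragment and start synthesizing immediately."""
        ...
    def flush(self) -> None:
        """Synthesize any buffered remainder."""
        ...

client = OpenAI()
tts = VibeVoiceRealtime()
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True,  # tokens arrive incrementally
)
for chunk in stream:
    delta = chunk.choices[0].delta.content or ""
    tts.feed(delta)  # audio can begin ~300 ms after the first tokens
tts.flush()
```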
Apple Surprises with CLaRa-7B: A Useful RAG Model. Breaking: Apple's CLaRa AI revolutionizes RAG! It compresses docs into smart, retrievable embeddings via QA/paraphrase training, then jointly optimizes retrieval & generation in one continuous space. Full Local Demo: 👇
Replies: 0 · Reposts: 0 · Likes: 1
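A generic dense-retrieval sketch of the idea above: compress documents into fixed-size embeddings once, then retrieve in that continuous space. This uses sentence-transformers as a stand-in encoder; it is not Apple's CLaRa code, whose compressor is trained on QA/paraphrase data and optimized jointly with generation.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in compressor

docs = [
    "CLaRa compresses documents into retrievable embeddings.",
    "Sydney is the largest city in Australia.",
    "MoE models route tokens to a subset of experts.",
]
doc_vecs = encoder.encode(docs, normalize_embeddings=True)  # one pass, cacheable

query = "How does CLaRa represent documents?"
q_vec = encoder.encode([query], normalize_embeddings=True)[0]

scores = doc_vecs @ q_vec          # cosine similarity (vectors are normalized)
best = int(np.argmax(scores))
print(docs[best], float(scores[best]))
```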
@fahdmirza This one is a reasoning model; I haven't tested it yet, but I'm super curious.
Replies: 1 · Reposts: 1 · Likes: 6
Meta AI's SAM 3D is live on fal! You can turn a single image file into an actual 3D object!
✅ 3D Alignment
✅ Image to 3D (Human) Body
✅ Image to 3D Object
Link 👇🏽
Replies: 3 · Reposts: 34 · Likes: 152
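For anyone who wants to try it from code, a hedged sketch of calling a fal endpoint with the real fal_client package. The model id "fal-ai/sam-3d" and the argument name are guesses; check fal's model gallery for the actual endpoint and schema.

```python
# Submit an image to a fal-hosted image-to-3D model and wait for the result.
import fal_client

result = fal_client.subscribe(
    "fal-ai/sam-3d",                                   # hypothetical endpoint id
    arguments={"image_url": "https://example.com/chair.jpg"},  # assumed schema
)
print(result)  # typically includes a URL to the generated 3D asset
```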
Meet Ministral 3 3B Instruct! A tiny, open-source model designed for the edge.
✨ Vision: Analyze images & text
✨ Local: Runs efficiently in ~8GB VRAM
✨ Multilingual: Supports dozens of languages
✨ Agentic: Native function calling & JSON output
✨ Large Context: 256k
Replies: 0 · Reposts: 0 · Likes: 3
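A minimal sketch of running a small instruct model like this locally with Hugging Face transformers. The repo id "mistralai/Ministral-3-3B-Instruct" is assumed from the announcement above, so verify the exact name on the Hub; a 3B model in bf16 should fit in roughly 8 GB of VRAM, matching the claim.

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="mistralai/Ministral-3-3B-Instruct",  # assumed repo id
    torch_dtype=torch.bfloat16,                 # ~6 GB of weights at bf16
    device_map="auto",
)
messages = [{"role": "user", "content": "Summarize MoE routing in one line."}]
out = pipe(messages, max_new_tokens=64)
print(out[0]["generated_text"][-1]["content"])  # last message = model's reply
```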
This might just be my favorite release of the year, especially with everything under Apache 2.0! Which model are you most excited about?
Replies: 3 · Reposts: 2 · Likes: 34
It's here! Introducing Mistral Large 3, our most powerful model yet. A state-of-the-art multimodal MoE with 675B total parameters, 256k context, and frontier-level performance. Best of all? It's open-source under Apache 2.0. Dive in 👇 https://t.co/ikldUUTWD9
@MistralAI
Replies: 2 · Reposts: 1 · Likes: 7
Transformers v5: Simple model definitions powering the AI ecosystem @huggingface
Replies: 0 · Reposts: 0 · Likes: 0
The vision: transformers as the backbone for the entire open AI/ML stack - training, fine-tuning, inference. The future is here.
Replies: 1 · Reposts: 0 · Likes: 0
Built for scale:
• PyTorch-native architecture
• Modular design patterns
• Quantization-first approach
• OpenAI-compatible API with Responses support
Replies: 1 · Reposts: 0 · Likes: 0
Transformers v5 is here: the numbers don't lie.
• Daily installs: 20k → 3M+ (150x growth!)
• Architectures: 40 → 400+ supported models
• Checkpoints: ~1k → 750k+ available
• Total installs: 1.2B+ across all versions
Replies: 1 · Reposts: 0 · Likes: 1
Token intelligence and MoE routing are exceptional in this new version.
Replies: 0 · Reposts: 0 · Likes: 0
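For context, "MoE routing" means a gating network picks a few experts per token. Here is a textbook top-k routing sketch in PyTorch; it is generic, not transformers v5's (or any specific model's) actual implementation.

```python
import torch
import torch.nn.functional as F

def route_tokens(x, gate_weight, k=2):
    """x: [tokens, d_model]; gate_weight: [d_model, n_experts]."""
    logits = x @ gate_weight                      # score every expert per token
    topk_vals, topk_idx = logits.topk(k, dim=-1)  # keep the k best experts
    weights = F.softmax(topk_vals, dim=-1)        # normalize over chosen experts
    return topk_idx, weights                      # which experts, with what mix

x = torch.randn(4, 64)         # 4 tokens, d_model = 64
gate = torch.randn(64, 8)      # 8 experts
idx, w = route_tokens(x, gate)
print(idx, w.sum(dim=-1))      # mixture weights sum to 1 per token
```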
DeepSeek-V3.2 just dropped:
• Matches GPT-5; Speciale beats it
• Gold medals: IMO, IOI, ICPC 2025
• 60% cheaper on long context (128K tokens)
• Best open-source coding agent (73% SWE-Verified)
• Thinks WHILE using tools (game changer)
• 10%+ of pre-training spent on RL
Replies: 2 · Reposts: 0 · Likes: 0
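"Thinks WHILE using tools" describes interleaved reasoning and tool calls: the model can pause mid-thought, call a tool, and resume with the result. A generic agent-loop sketch of that pattern; the client.chat call and reply format here are placeholders, not DeepSeek's actual API.

```python
import json

def run_tool(name, args):
    tools = {"add": lambda a: a["x"] + a["y"]}   # toy tool registry
    return tools[name](args)

def agent_loop(client, messages, max_steps=8):
    for _ in range(max_steps):
        reply = client.chat(messages)            # placeholder chat call
        if reply.get("tool_call"):               # model paused to use a tool
            call = reply["tool_call"]
            result = run_tool(call["name"], call["args"])
            messages.append({"role": "tool", "content": json.dumps(result)})
            continue                             # model resumes reasoning
        return reply["content"]                  # final answer
    raise RuntimeError("agent did not finish within max_steps")
```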
For me, the release of last week was the Z-Image model from @Ali_TongyiLab.
Replies: 0 · Reposts: 0 · Likes: 1
As a Thanksgiving gift, @Kimi_Moonshot has given limited free access: https://t.co/4ToMhqQMOZ
Replies: 0 · Reposts: 0 · Likes: 0