Explore tweets tagged as #ModelCompression
Model compression in ICCLOUD! 8-bit Quantization cuts memory by 75%, Weight Pruning reduces compute by 50%. Save resources! #ModelCompression #Resources
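Where the 75% comes from: swapping 32-bit floats for 8-bit integers is a 4x reduction in weight storage. Below is a minimal sketch of symmetric per-tensor int8 quantization in plain NumPy; it illustrates the general technique only, not ICCLOUD's implementation, and the function names are invented for the example.

import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization of float32 weights to int8."""
    scale = float(np.max(np.abs(w))) / 127.0            # map the largest weight to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 codes."""
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)
q, scale = quantize_int8(w)

# 32-bit floats -> 8-bit ints: 4x smaller, i.e. the ~75% memory saving claimed above.
print(w.nbytes / 2**20, "MiB fp32")   # 64.0 MiB
print(q.nbytes / 2**20, "MiB int8")   # 16.0 MiB

Real deployments also need per-channel scales and calibration data; this is only the arithmetic behind the headline number.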
Introducing Nota 🇰🇷, an AI model compression company whose technology makes deep learning models run faster with less computing power. They help companies deploy AI on mobile devices without sacrificing performance. Efficiency-focused AI innovation. #AI #ModelCompression #MobileAI
CommVQ (ICML ’25) shows an 8× KV-cache reduction with 2-bit quantization—and even 1-bit with only minor quality loss—letting an 8B LLaMA-3.1 handle 128K tokens on a single RTX 4090 (24 GB). Paper ▶︎ #ICML2025 #LLM #KVcache #ModelCompression
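A back-of-envelope check on those numbers. The architectural constants below (32 decoder layers, 8 KV heads under grouped-query attention, head dimension 128 for LLaMA-3.1-8B) are my assumptions for the sketch, not taken from the tweet.

# Back-of-envelope KV-cache sizing for a 128K-token context.
# Assumed LLaMA-3.1-8B shape: 32 layers, 8 KV heads (GQA), head_dim 128.
layers, kv_heads, head_dim = 32, 8, 128
tokens = 128_000

values_per_token = 2 * layers * kv_heads * head_dim   # one K and one V per head per layer

def cache_gib(bits_per_value: float) -> float:
    """Total KV-cache size in GiB for the given precision."""
    return values_per_token * tokens * bits_per_value / 8 / 2**30

print(f"fp16 cache : {cache_gib(16):.1f} GiB")   # ~15.6 GiB, leaves no room for weights
print(f"2-bit cache: {cache_gib(2):.1f} GiB")    # ~2.0 GiB, the 8x reduction

At 2 bits, the cache plus roughly 16 GB of fp16 weights for an 8B model plausibly fits on a 24 GB card, which is the point of the claim.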
Optimizing AI Models for Local Devices: Balancing Privacy and Performance. #AIModelOptimization #LocalDevices #PrivacyVsPerformance #ModelCompression #AIChallenges #DeviceProcessingPower #DataPrivacy #ModelQuality #AIOnDevices #AIInnovation
📚🤖 Knowledge Distillation = a small AI model learning from a big one! Smarter, faster, more efficient. Perfect for NLP & vision! 🚀📱 See here - #KnowledgeDistillation #AI2025 #DeepLearning #TechChilli #ModelCompression
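The classic recipe behind that one-liner is Hinton-style distillation: the small model matches the teacher's temperature-softened probabilities in addition to the usual hard-label loss. A minimal PyTorch sketch, with illustrative names and hyperparameters:

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.5):
    """Soft targets from the teacher plus cross-entropy on the hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2                      # rescale gradients of the softened term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: 8 examples, 10 classes.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()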
AI model compression isn't just a technical refinement but a strategic choice that aligns cost reduction, sustainability, and operational agility with the pressing demands of today's rapidly evolving digital landscape. By @antgrasso #AI #ModelCompression #Efficiency
Today's #PerfiosAITechTalk talks about how #ModelCompression can be used for efficient on-device runtimes. #NeuralNetworks #Datascience #ML #AI
📢 Our paper "Quantifying Knowledge Distillation Using Partial Information Decomposition" will be presented at #AISTATS2025 on May 5th, Poster Session 3! Our work brings together #modelcompression and #explainability through the lens of #informationtheory. Link:
The four common #ModelCompression techniques: 1) Quantization, 2) Pruning, 3) Knowledge distillation, 4) Low-rank matrix factorization. What does your experience point you toward? #NeuralNetworks #DataScience #PerfiosAITechTalk
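Of the four, pruning is the easiest to show in a few lines. A minimal unstructured magnitude-pruning sketch in PyTorch; this is illustrative only, and real compute savings usually require structured sparsity or sparse kernels rather than simply zeroing weights.

import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float = 0.5) -> torch.Tensor:
    """Zero out the smallest-magnitude weights (unstructured pruning)."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight.clone()
    threshold = weight.abs().flatten().kthvalue(k).values   # k-th smallest magnitude
    mask = weight.abs() > threshold
    return weight * mask

w = torch.randn(1024, 1024)
pruned = magnitude_prune(w, sparsity=0.5)
print(f"sparsity: {(pruned == 0).float().mean().item():.2f}")   # ~0.50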
RT Model Compression: A Look into Reducing Model Size #machinelearning #modelcompression #tinyml #deeplearning
RT How to adapt a multilingual T5 model for a single language #nlp #transformers #modelcompression #machinelearning #t5
🧠💡 How to make #ChatGPT fit in your pocket with quantum inspiration. In the #BlogdeExpertosITCL, our #computacióncuántica (quantum computing) expert shares the keys to making it happen. 📉📈 #IA #LLM #ModelCompression #TensorNetworks #ChatGPT #EdgeAI
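The tensor-network methods the post alludes to generalize low-rank factorization (for example, tensor-train decompositions of weight matrices). The simplest relative is a rank-r SVD split of one layer's weight matrix, sketched below with made-up shapes:

import numpy as np

def low_rank_factorize(w: np.ndarray, rank: int):
    """Approximate W (m x n) by A @ B with A: m x r and B: r x n via truncated SVD."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]        # absorb the singular values into A
    b = vt[:rank, :]
    return a, b

w = np.random.randn(4096, 1024).astype(np.float32)
a, b = low_rank_factorize(w, rank=128)

params_before = w.size
params_after = a.size + b.size
print(params_before, "->", params_after)   # 4,194,304 -> 655,360 (~6.4x fewer parameters)

The compressed layer stores A and B instead of W, trading a small approximation error for several times fewer parameters in this example.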
#mdpientropy "Population Risk Improvement with Model Compression: An Information-Theoretic Approach" #empiricalrisk #generalizationerror #modelcompression #populationrisk
Georgia Tech & #Microsoft Reveal ‘Super Tickets’ in Pretrained Language Models: Improving Model Compression and Generalization | #AI #ML #ArtificialIntelligence #MachineLearning #DeepNeuralNetworks #LanguageModel #ModelCompression
What do you mean by #ModelCompression? The answer is here, courtesy #NeuralNetworks expert @bsourav29 #PerfiosAITechTalk
AI model compression isn't just a technical refinement but a strategic choice that aligns cost reduction, sustainability, and operational agility with the pressing demands of today's rapidly evolving digital landscape. Microblog by @antgrasso #AI #ModelCompression #Efficiency