Explore tweets tagged as #SparseModels
@GTechCouncil
Global Tech Council
4 months
Tech terms decoded! 🛠️ Attention techies, it's time for #TermOfTheDay. Today, we are learning about: Sparse Models! ⚡ #TechTerms #SparseModels #AI #MachineLearning #DeepLearning #TechEducation
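For anyone landing on the hashtag cold: a sparse model is one in which most weights are exactly zero, so storage and compute can skip them. A minimal illustration (not from the tweet's media card, which is not reproduced here) using magnitude pruning and SciPy's CSR format:

import numpy as np
from scipy.sparse import csr_matrix

# Dense weight matrix where 90% of entries have been pruned to zero.
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256))
w[np.abs(w) < np.quantile(np.abs(w), 0.9)] = 0.0  # magnitude pruning

# Sparse storage keeps only the ~10% nonzero weights (plus indices),
# and matrix-vector products skip the zeros entirely.
w_sparse = csr_matrix(w)
x = rng.standard_normal(256)
y = w_sparse @ x
print(f"nonzero fraction: {w_sparse.nnz / w.size:.2f}, output shape: {y.shape}")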
@ulasbagci
ulas bagci
7 years
@KhosravanNaji
Naji Khosravan
7 years
Happy to announce that our work "A Collaborative Computer Aided Diagnosis (C-CAD) System with Eye-Tracking, Sparse Attentional Model, and Deep Learning", in collaboration with the NIH Clinical Center, has been accepted for publication in the Medical Image Analysis journal (MedIA). @ulasbagci
@AiCompetence
AI CompeteNCE📍
10 months
When it comes to machine learning and AI models, there's a familiar challenge: scaling. #AIbottlenecks #AImodeloptimization #AIscaling #efficientAItraining #GRINMoE #scalingchallenges #sparsemodels #SparseMixerv2
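Sparse mixture-of-experts models like the GRIN-MoE line the tweet tags are one common answer to that scaling bottleneck: only a routed subset of the parameters is active per input. A minimal top-1 routing sketch (illustrative only, not GRIN-MoE's actual router):

import numpy as np

def top1_moe(x, gate_w, experts):
    """Minimal sparse mixture-of-experts routing: each input is sent to
    only its top-scoring expert, so compute per token stays constant no
    matter how many experts (total parameters) the model holds."""
    scores = x @ gate_w                 # (batch, n_experts) routing logits
    chosen = scores.argmax(axis=1)      # top-1 expert per input
    out = np.zeros_like(x)
    for e, expert in enumerate(experts):
        idx = np.where(chosen == e)[0]
        if idx.size:
            out[idx] = expert(x[idx])   # only the routed expert runs
    return out

rng = np.random.default_rng(0)
d, n_experts = 8, 4
experts = [lambda h, W=rng.standard_normal((d, d)): h @ W
           for _ in range(n_experts)]
gate_w = rng.standard_normal((d, n_experts))
x = rng.standard_normal((16, d))
print(top1_moe(x, gate_w, experts).shape)  # (16, 8)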
@aitoptools
AITopTools
2 years
#AI & #MachineLearning need to converge well. Check out this new theory that could make this possible! 🤔 #SparseModels
@Saif_BinSafwan
سيف محمد بن صفوان | Saif Bin Safwan
8 months
Neural Magic has announced Sparse Llama 3.1 8B, a language model that is smaller and more efficient than its predecessor. The new model aims to make AI technology accessible to everyone, as it can run on less powerful hardware. #AI #MachineLearning #SparseModels #NeuralMagic #Llama_3_1_8B
@h2oai
H2O.ai
8 years
@Stanford H2O.ai advisors, Trevor Hastie & Rob Tibshirani, are holding a 2-day course in #MachineLearning #DeepLearning #SparseModels.
@RedHat_AI
Red Hat AI
8 months
@vllm_project Download Sparse Llama: See benchmarks and our approach: Thanks to @_EldarKurtic, Alexandre Marques, @markurtz_, @DAlistarh, Shubhra Pandit & the Neural Magic team for always enabling efficient AI! #SparseModels #OpenSourceAI #vLLM
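The linked benchmarks are not reproduced here, but since the tweet tags #vLLM: a minimal sketch of loading a sparse checkpoint with vLLM's offline API (the model id below is an assumed placeholder; check Neural Magic's Hugging Face page for the exact repository name):

from vllm import LLM, SamplingParams

# Assumed repository name, for illustration only.
llm = LLM(model="neuralmagic/Sparse-Llama-3.1-8B-2of4")
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain weight sparsity in one sentence."], params)
print(outputs[0].outputs[0].text)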
@FinSentim
FinSentim
4 years
Jonathan Schwarz et al. introduce #Powerpropagation, a new weight-parameterisation for #neuralnetworks that leads to inherently #sparsemodels. Exploiting the behavior of gradient descent, their method gives rise to weight updates exhibiting a "rich get richer" dynamic.
@_akhaliq
AK
4 years
Powerpropagation: A sparsity inducing weight reparameterisation. A new weight-parameterisation for neural networks that leads to inherently sparse models.
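The "rich get richer" dynamic follows directly from the reparameterisation: writing each weight as w = phi * |phi|**(alpha - 1) makes the gradient with respect to phi scale with |phi|**(alpha - 1), so large weights move more and small ones decay toward zero, where they are easy to prune. A minimal PyTorch sketch of that idea (the layer name, initialisation, and alpha value are illustrative assumptions, not the paper's exact setup):

import torch
import torch.nn.functional as F

class PowerpropLinear(torch.nn.Module):
    """Linear layer with a Powerpropagation-style reparameterisation:
    effective weight w = phi * |phi|**(alpha - 1) (sign-preserving power).
    Gradients w.r.t. phi pick up a factor alpha * |phi|**(alpha - 1), so
    large-magnitude weights get larger updates ("rich get richer") while
    small ones stagnate near zero."""

    def __init__(self, in_features, out_features, alpha=2.0):
        super().__init__()
        self.alpha = alpha
        self.phi = torch.nn.Parameter(0.1 * torch.randn(out_features, in_features))
        self.bias = torch.nn.Parameter(torch.zeros(out_features))

    def effective_weight(self):
        return self.phi * self.phi.abs().pow(self.alpha - 1.0)

    def forward(self, x):
        return F.linear(x, self.effective_weight(), self.bias)

layer = PowerpropLinear(16, 8)
loss = layer(torch.randn(4, 16)).pow(2).mean()
loss.backward()
# Gradient magnitude on phi tracks |phi|: small weights barely move.
print(torch.corrcoef(torch.stack([layer.phi.abs().flatten(),
                                  layer.phi.grad.abs().flatten()]))[0, 1])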
@wtf_techtonic
TechTonic
11 months
Read more about this exciting finding and its implications for AI development: #artificialintelligence #deeplearning #sparsemodels
@FinSentim
FinSentim
4 years
@sarahookr, @KaliTessera, and Benjamin Rosman take a broader view of training #sparsenetworks and consider the role of regularization, optimization, and architecture choices on #sparsemodels. They propose a simple experimental framework: #SameCapacitySparse vs #DenseComparison.
@sarahookr
Sara Hooker
4 years
Tomorrow at @ml_collective DLTC reading group, @KaliTessera will be presenting our work on how initialization is only one piece of the puzzle for training sparse networks. Can taking a wider view of model design choices unlock sparse training?
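The "same capacity" framing in that thread compares a pruned network against a dense one with a matched trainable-parameter budget rather than matched layer widths. A back-of-the-envelope sketch (the layer sizes are made-up numbers for illustration):

def dense_params(widths):
    """Trainable weights in a fully connected net with the given layer widths."""
    return sum(a * b for a, b in zip(widths[:-1], widths[1:]))

# A wide network pruned to 10% density ...
wide = [784, 1024, 1024, 10]
sparse_budget = int(0.1 * dense_params(wide))

# ... versus a narrower dense network built to the same parameter budget.
narrow = [784, 190, 190, 10]
print(sparse_budget, dense_params(narrow))  # roughly matched capacities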
@krlmlr
Kirill Müller 🇺🇦 @[email protected]
9 years
@MMaechler Noticed that the sparseModels vignette is a bit dated and conservative -- interactions work just fine for me.