Explore tweets tagged as #Sparse
We will be presenting Self-Distilled Sparse Drafters at #ICML this afternoon at the Efficient Systems for Foundation Models workshop. Come chat with us about accelerating speculative decoding! 🚀 @CerebrasSystems @UCalgaryML
Announcing our EXAIT@ICML workshop paper: CURATE! Have a difficult target task distribution with sparse rewards that you want to train an RL agent to complete? 🤔 We tackle this problem using our curriculum learning algorithm, CURATE. 🎓 Link: 1/6
🚨 What happens under the hood when you fine-tune or unlearn LLMs? We introduce MNEME, a sparse model diffing method that predicts & explains unintended side effects (like emergent toxicity or forgotten knowledge) without needing fine-tuning data. @ActInterp #ICML July 19 Poster
A comprehensive and nicely delivered tutorial by @macavaney et al. on learned sparse retrieval methods and their applications across various tasks. Slides and other materials are available via the link shown in the last image. #SIGIR2025
Check out our new paper: Video-RTS 🎥 A data-efficient RL method for complex video reasoning tasks. 🔹 Pure RL w/ output-based rewards. 🔹 Novel sparse-to-dense Test-Time Scaling (TTS) to expand input frames via self-consistency. 💥 96.4% less training data! More in the thread👇
🚨 Introducing Video-RTS: Resource-Efficient RL for Video Reasoning with Adaptive Video TTS! While RL-based video reasoning with LLMs has advanced, the reliance on large-scale SFT with extensive video data and long CoT annotations remains a major bottleneck. Video-RTS tackles
#Garlasco so the dirty underwear scattered around the house was never analyzed? Four pairs were in the bag on the sofa. Two on Chiara's desk, two on the bathroom sink, two on the edge of the bathtub. Never analyzed? Absurd, absurd. And the red shirt? Never analyzed.
🌌🛰️🔭Want to explore universal visual features? Check out our interactive demo of concepts learned from our #ICML2025 paper "Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment". Come see our poster at 4pm on Tuesday in East Exhibition hall A-B, E-1208!
🌌🛰️Wanna know which features are universal vs unique in your models and how to find them? Excited to share our preprint: "Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment"! (1/9)