emre

@artollen

Followers: 144 · Following: 2K · Media: 9 · Statuses: 202

msc cs @polimi - bs cs @UniBogazici | research: efficientML/quantization/robustness | mlx sometimes | https://t.co/aqlnNh4zPU

Milano/Italy
Joined October 2023
emre @artollen · 17 days
RT @dhruv31415: For a second I thought someone had dropped the most cursed mixed precision strategy.
0 · 16 · 0
emre @artollen · 1 month
RT @tenderizzation: cuda:3 cuda:2 cuda:1 cuda:0.
0 · 7 · 0
emre @artollen · 2 months
RT @DAlistarh: Announcing our early work on FP4 inference for LLMs!
- QuTLASS: low-precision kernel support for Blackwell GPUs
- FP-Quant: …
0 · 37 · 0
emre @artollen · 2 months
RT @edgeaivision: New Blog Post from NVIDIA: "Introducing NVFP4 for Efficient and Accurate Low-precision Inference".
0 · 1 · 0
emre @artollen · 2 months
[image]
0 · 8 · 0
emre @artollen · 3 months
RT @awnihannun: MLX got an official website!
[image]
0 · 69 · 0
emre @artollen · 3 months
RT @ivanfioravanti: You have one team that brought back community and excitement to the Apple ecosystem from nothing, without marketing and big…
0 · 7 · 0
emre @artollen · 3 months
RT @jxmnop: The case for more ambition. i wrote about how AI researchers should ask bigger and simpler questions, and publish fewer pap…
0 · 96 · 0
emre @artollen · 3 months
RT @tsengalb99: 📣 Introducing our latest work: Yet Another Quantization Algorithm! YAQA directly minimizes the KL divergence to the origina…
0 · 27 · 0
emre @artollen · 3 months
RT @zechunliu: 🚀 We're releasing ParetoQ, a family of quantized MobileLLMs: ultra-efficient, performance-retaining models for edge devices…
0 · 1 · 0
emre @artollen · 3 months
RT @TacoCohen: Nobody wants to hear it, but working on data is more impactful than working on methods or architectures.
0 · 97 · 0
emre @artollen · 3 months
RT @DAlistarh: We are introducing Quartet, a fully FP4-native training method for Large Language Models, achieving optimal accuracy-efficie…
0 · 78 · 0
emre @artollen · 3 months
RT @kerstingAIML: 🚀 TU Darmstadt leads the new Cluster of Excellence "Reasonable AI" – advancing trustworthy, efficient & adaptive AI groun…
0 · 15 · 0
emre @artollen · 3 months
RT @skalskip92: for Basketball AI, I want to read jersey numbers and map them to player names. I'm using RF-DETR + SmolVLM2 combo to do it…
0 · 46 · 0
emre @artollen · 3 months
RT @_derek_liu_: New Blog Post! 🚀 Explore how quantization backends in Diffusers make large diffusion models like Flux run with less VRAM w…
0 · 20 · 0
emre @artollen · 4 months
today's must read 😍
merve @mervenoyann · 4 months
VLMS 2025 UPDATE 🔥 We just shipped a blog on everything latest on vision language models, including:
🤖 GUI agents, agentic VLMs, omni models
📑 multimodal RAG
⏯️ video LMs
🤏🏻 smol models
and more! find it on the next one ⤵️
[image]
1 · 0 · 3
emre @artollen · 4 months
RT @mervenoyann: VLMS 2025 UPDATE 🔥 We just shipped a blog on everything latest on vision language models, including 🤖 GUI agents, agentic…
0 · 123 · 0
emre @artollen · 4 months
RT @xiaozheyao: COLM reviewer guideline is next level. I am touched and cannot agree more. @COLM_conf
[image]
0 · 15 · 0
emre @artollen · 4 months
RT @willccbb: learning is compression, things you understand well will always feel simple in hindsight, yet therein lies the highest alpha.
0 · 9 · 0
emre @artollen · 4 months
RT @PrunaAI: 🧑‍🏫 All you need to know about AI model optimisation techniques! In this blog by @davidberenstei, you go over the fundamental…
0 · 3 · 0