
Adam Golinski
@adam_golinski
Followers: 3K · Following: 11K · Media: 21 · Statuses: 800
ML research @Apple, prev @OxCSML @InfAtEd, part of @MLinPL & @polonium_org 🇵🇱, sometimes funny
Barcelona
Joined December 2014
RT @mkirchhof_: I'll present my view on the future of uncertainties in LLMs and vision models at @icmlconf, in panel discussions, posters,….
RT @MohammadHAmani: Why does RL struggle with tasks requiring long reasoning chains? Because “bumping into” a correct solution becomes exp….
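A back-of-the-envelope illustration of the claim above (my own numbers, not from the thread): if each step of a reasoning chain is completed correctly with probability p under random exploration, the chance of reaching a correct n-step solution, and with it the sparse terminal reward, is p^n, which shrinks exponentially in n.

# Toy illustration: probability of "bumping into" a full correct chain
# when each of n steps succeeds independently with probability p.
for p in (0.9, 0.5):
    for n in (5, 20, 50):
        print(f"p={p}, n={n}: chance of a correct chain = {p ** n:.2e}")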
RT @mkirchhof_: Can LLMs access and describe their own internal distributions? With my colleagues at Apple, I invite you to take a leap for….
RT @fbickfordsmith: There’s a lot of confusion around uncertainty in machine learning. We argue the "aleatoric vs epistemic" view has cont….
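For context on the view that tweet pushes back on: the usual "aleatoric vs epistemic" split decomposes an ensemble's predictive entropy into the expected per-member entropy (aleatoric) plus a mutual-information term (epistemic). A minimal numpy sketch with made-up ensemble probabilities, not taken from the paper:

import numpy as np

def entropy(p, axis=-1):
    # Shannon entropy of a categorical distribution (nats).
    return -np.sum(p * np.log(p + 1e-12), axis=axis)

# Hypothetical ensemble: 3 members, each predicting over 2 classes.
member_probs = np.array([[0.9, 0.1],
                         [0.6, 0.4],
                         [0.2, 0.8]])

total = entropy(member_probs.mean(axis=0))         # predictive entropy
aleatoric = entropy(member_probs, axis=-1).mean()  # expected member entropy
epistemic = total - aleatoric                      # mutual information
print(total, aleatoric, epistemic)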
RT @TeresaNHuang: Is the mystery behind the performance of Mamba🐍 keeping you awake at night? We got you covered! Our ICML2025 paper demys….
RT @RichardMCNgo: I recently gave a short talk at the International Workshop on Reimagining Democracy. The first half focused on feeling t….
RT @MLinPL: We are happy to welcome our next speaker to MLSS 2025! 🎤 @BarzilayRegina is a School of Engineering Distinguished Professor of….
RT @MartinKlissarov: Here is an RL perspective on understanding LLMs for decision making. Are LLMs best used as: policies / rewards / tra….
RT @RinMetcalfSusa: 🚀 We're hiring an ML Researcher! 🚀 If you're an expert in LLM alignment & personalization and want to work on a world-….
lnkd.in
RT @aakaran31: Can machine learning models predict their own errors 🤯? In a new preprint w/ @Apple collaborators Aravind Gollakota, Parik….
RT @ArwenBradley: When does composition of diffusion models “work”? Prior work (Du et al., 2023; Liu et al., 2022) has shown that compositi….
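Background on the composition scheme that line of work studies (the tweet's question is when it actually works): adding the scores of two diffusion models approximately targets the renormalized product of their densities. A toy sketch with two analytic 1-D Gaussian scores and unadjusted Langevin sampling, purely illustrative and not the paper's setup:

import numpy as np

# Scores (gradients of log density) of two 1-D Gaussians.
def score1(x): return -(x - 2.0)        # N(2, 1)
def score2(x): return -(x + 2.0)        # N(-2, 1)

# Composed model: sum of scores ~ score of the product density.
def composed_score(x): return score1(x) + score2(x)

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
step = 0.05
for _ in range(500):                    # unadjusted Langevin dynamics
    noise = rng.normal(size=x.shape)
    x = x + step * composed_score(x) + np.sqrt(2 * step) * noise

print(x.mean(), x.std())                # product of N(2,1) and N(-2,1) is N(0, 1/2)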
RT @danbusbridge: Reading "Distilling Knowledge in a Neural Network" left me fascinated and wondering: "If I want a small, capable model,….
arxiv.org
We provide a distillation scaling law that estimates distilled model performance based on a compute budget and its allocation between the student and teacher. Our findings reduce the risks...
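The linked paper is about a compute-allocation scaling law for distillation; as background on the technique itself, the loss from the Hinton et al. paper named in the tweet mixes a temperature-T soft-target KL term (scaled by T^2) with the usual hard-label cross-entropy. A minimal numpy sketch with made-up logits, not the paper's scaling law:

import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Made-up logits for one example over 3 classes.
teacher_logits = np.array([4.0, 1.0, 0.5])
student_logits = np.array([2.5, 1.5, 1.0])
label, T, alpha = 0, 4.0, 0.5

p_t = softmax(teacher_logits, T)
p_s = softmax(student_logits, T)
soft_loss = np.sum(p_t * (np.log(p_t) - np.log(p_s))) * T**2  # KL at temperature T
hard_loss = -np.log(softmax(student_logits)[label])           # cross-entropy on the true label
loss = alpha * soft_loss + (1 - alpha) * hard_loss
print(loss)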
RT @BlackHC: Have you wondered why I've posted all these nice plots and animations? 🤔 Well, the slides for my lectures on (Bayesian) Activ….
RT @eugene_ndiaye: MLSS is coming to Senegal 🇸🇳 in 2025! 🌍 📍 AIMS Mbour, Senegal. 📅 June 23 - July 4, 2025. An international summer school….
RT @zhaisf: We attempted to make Normalizing Flows work really well, and we are happy to report our findings in a paper.
RT @DonkeyShot21: We release AIMv2, the second iteration of the AIM family of large autoregressive vision encoders. This time we bring mult….
RT @prlz77: I’m thrilled to announce 3 #internship openings @Apple ML Research in beautiful ☀️ #Barcelona ☀️ for 2025! Two internships on G….
RT @maartjeterhoeve: 📢 Today at #EMNLP2024, we will present: On the Limited Generalization Capability of the Implicit Reward Model Induced….