Adam Golinski

@adam_golinski

Followers: 3K · Following: 11K · Media: 21 · Statuses: 811

ML research @Apple, prev @OxCSML @InfAtEd, part of @MLinPL & @polonium_org 🇵🇱, sometimes funny

Barcelona
Joined December 2014
@adam_golinski
Adam Golinski
11 months
What a great day to open 🦋: @adamgol.*.* it's been way too long #MyXAnniversary
0
0
7
@mkirchhof_
Michael Kirchhof
5 hours
Our research team is hiring PhD interns 🍏 Spend your next summer in Paris and explore the next frontiers of LLMs for uncertainty quantification, calibration, RL and post-training, and Bayesian experimental design. Details & Application ➡️
jobs.apple.com
Apply for an Internship - Machine Learning Research on Uncertainty job at Apple. Read about the role and find out if it’s right for you.
1
20
113
@sineadwilliamso
Sinead Williamson
7 days
📢 We’re looking for a researcher in cogsci, neuroscience, linguistics, or related disciplines to work with us at Apple Machine Learning Research! We're hiring for a one-year interdisciplinary AIML Resident to work on understanding reasoning and decision making in LLMs. 🧵
8
57
309
@PreetumNakkiran
Preetum Nakkiran
4 days
LLMs are notorious for "hallucinating": producing confident-sounding answers that are entirely wrong. But with the right definitions, we can extract a semantic notion of "confidence" from LLMs, and this confidence turns out to be calibrated out-of-the-box in many settings (!)
22
81
582
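The "calibrated out-of-the-box" claim above can be made concrete with expected calibration error (ECE): bin predictions by confidence and compare each bin's average confidence to its accuracy. This is a minimal sketch with made-up numbers, not the paper's method or data:

```python
# Toy expected-calibration-error (ECE) computation over hypothetical
# (confidence, correct?) pairs extracted from a model.
def ece(confs, correct, n_bins=5):
    assert len(confs) == len(correct)
    total = len(confs)
    err = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Indices of predictions whose confidence falls in this bin.
        idx = [i for i, c in enumerate(confs)
               if lo < c <= hi or (b == 0 and c == lo)]
        if not idx:
            continue
        avg_conf = sum(confs[i] for i in idx) / len(idx)
        acc = sum(correct[i] for i in idx) / len(idx)
        # Weight each bin's |confidence - accuracy| gap by its size.
        err += len(idx) / total * abs(avg_conf - acc)
    return err

confs = [0.9, 0.8, 0.6, 0.3, 0.95]
correct = [1, 1, 0, 0, 1]
print(round(ece(confs, correct), 3))  # 0.25
```

A perfectly calibrated model has ECE 0: within every confidence bin, accuracy matches the stated confidence.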
@janundnik
Jannik Kossen
10 days
Come do a research internship with us at FAIR Coding in Paris or Tel Aviv 🤗 We've just released CWM ( https://t.co/nLXj35VSZV) and are now looking for strong students interested in working on the next generation of reasoning and coding models. Apply as soon as you can!
1
1
4
@maciejpioro
Maciej Pióro
1 month
(1/n) Introducing KaVa ( https://t.co/xPyMoCtCSE) – the first latent reasoning framework leveraging compressed KV-Cache to guide the latent generation. We beat previous approaches, especially on a realistic, Natural Language GSM8K dataset.
2
5
18
@aakaran31
Aayush Karan
28 days
We found a new way to get language models to reason. 🤯 No RL, no training, no verifiers, no prompting. ❌ With better sampling, base models can achieve single-shot reasoning on par with (or better than!) GRPO while avoiding its characteristic loss in generation diversity.
69
251
2K
@awnihannun
Awni Hannun
1 month
I love this line of research from my colleagues at Apple: Augmenting a language model with a hierarchical memory makes perfect sense for several reasons: - Intuitively the memory parameters should be accessed much less frequently than the weights responsible for reasoning. You
@HPouransari
Hadi Pouransari
1 month
Introducing Pretraining with Hierarchical Memories: Separating Knowledge & Reasoning for On-Device LLM Deployment 💡We propose dividing LLM parameters into 1) anchor (always used, capturing commonsense) and 2) memory bank (selected per query, capturing world knowledge). [1/X]🧵
9
75
702
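The anchor/memory-bank split described in the thread can be caricatured in a few lines. This is my own simplification for intuition only (keyword overlap standing in for a learned retriever), not the paper's implementation:

```python
# Toy "memory bank" selection: an always-on anchor would hold
# commonsense parameters; per query we pick only a few memory blocks.
def select_memories(query_tokens, memory_bank, k=2):
    # Score each memory block by keyword overlap with the query,
    # highest overlap first (sort is stable, so ties keep dict order).
    scored = sorted(
        memory_bank.items(),
        key=lambda kv: -len(set(query_tokens) & kv[1]["keywords"]),
    )
    return [name for name, _ in scored[:k]]

memory_bank = {
    "geography": {"keywords": {"paris", "capital", "france"}},
    "chemistry": {"keywords": {"atom", "molecule", "bond"}},
    "sports":    {"keywords": {"goal", "match", "league"}},
}
query = {"what", "is", "the", "capital", "of", "france"}
print(select_memories(query, memory_bank))  # geography ranks first
```

The on-device appeal is that only the anchor plus the selected blocks need to be resident per query, rather than all world-knowledge parameters at once.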
@ArwenBradley
Arwen Bradley
2 months
How do diffusion models generate images for prompts like "A cat eating sushi with chopsticks in the style of van Gogh" that were (probably) not seen during training? Models seem to compose known concepts (cat+sushi+style), but how?
1
10
33
@BetleyJan
Jan Betley
4 months
Yeah we did exactly that
@OwainEvans_UK
Owain Evans
4 months
New paper & surprising result. LLMs transmit traits to other models via hidden signals in data. Datasets consisting only of 3-digit numbers can transmit a love for owls, or evil tendencies. 🧵
7
45
1K
@teelinsan
Andrea Santilli
4 months
Uncertainty quantification (UQ) is key for safe, reliable LLMs... but are we evaluating it correctly? 🚨 Our ACL2025 paper finds a hidden flaw: if both UQ methods and correctness metrics are biased by the same factor (e.g., response length), evaluations get systematically skewed
1
17
47
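The flaw described above is easy to reproduce synthetically. In this sketch (my framing, not the paper's experiment), a UQ score and a "correctness" metric are independent given response length, yet both decrease with length, so their raw correlation is strongly positive and the UQ method looks better than it is:

```python
import random

random.seed(0)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

lengths = [random.randint(10, 200) for _ in range(500)]
# Both quantities share the same 1/length bias plus independent noise.
uq_score = [1.0 / L + random.gauss(0, 0.001) for L in lengths]
metric = [1.0 / L + random.gauss(0, 0.001) for L in lengths]
print(pearson(uq_score, metric) > 0.5)  # length bias alone induces correlation
```

Controlling for the shared confounder (e.g. evaluating within length strata) would collapse this apparent agreement.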
@mkirchhof_
Michael Kirchhof
4 months
I'll present my view on the future of uncertainties in LLMs and vision models at @icmlconf, in panel discussions, posters, and workshops. Reach out if you wanna chat :) Here's everything from me and other folks at Apple: https://t.co/MnSE4anJRS
0
6
30
@MohammadHAmani
masani
5 months
Why does RL struggle with tasks requiring long reasoning chains? Because “bumping into” a correct solution becomes exponentially less likely as the number of reasoning steps grows. We propose an adaptive backtracking algorithm: AdaBack. 1/n
2
12
58
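The "exponentially less likely" intuition in the tweet is just a product of per-step success probabilities, and it also shows why revealing a solution prefix (the backtracking idea) helps. A toy model of the claim, my framing rather than the AdaBack algorithm itself:

```python
# If each reasoning step is independently correct with probability p,
# a full n-step chain succeeds with probability p**n: exponential decay.
def chain_success_prob(p, n):
    return p ** n

# Revealing a correct prefix of length k leaves only n - k steps to
# discover, so exploration success jumps from p**n to p**(n - k).
def success_with_prefix(p, n, k):
    return p ** (n - k)

for n in (2, 5, 10, 20):
    print(n, chain_success_prob(0.5, n))
print(success_with_prefix(0.5, 20, 15))  # only 5 steps left to find
```

An adaptive curriculum can then shrink the revealed prefix as the policy improves, keeping the exploration problem tractable at every stage.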
@mkirchhof_
Michael Kirchhof
4 months
Can LLMs access and describe their own internal distributions? With my colleagues at Apple, I invite you to take a leap forward and make LLM uncertainty quantification what it can be. 📄 https://t.co/uhoCJfPdZK 💻 https://t.co/pQY1DfaKtS 🧵1/9
1
21
89
@fbickfordsmith
Freddie Bickford Smith
5 months
There’s a lot of confusion around uncertainty in machine learning. We argue the "aleatoric vs epistemic" view has contributed to this and present a rigorous alternative. #ICML2025 with @janundnik @eleanortrollope @markvanderwilk @adamefoster @tom_rainforth 1/5
1
15
60
@TeresaNHuang
Teresa Huang
5 months
Is the mystery behind the performance of Mamba 🐍 keeping you awake at night? We got you covered! Our ICML2025 paper demystifies input selectivity in Mamba from the lens of approximation power, long-term memory, and associative recall capacity. https://t.co/dWDYyIWLzt
1
17
51
@RichardMCNgo
Richard Ngo
10 months
I recently gave a short talk at the International Workshop on Reimagining Democracy. The first half focused on feeling the AGI. The second half briefly outlined a new research direction I'm very excited about: leveraging AI to build unprecedentedly trustworthy institutions.
18
24
289
@MLinPL
ML in PL
7 months
We are happy to welcome our next speaker to MLSS 2025! 🎤 @BarzilayRegina is a School of Engineering Distinguished Professor of AI & Health in the Department of Computer Science and the AI Faculty Lead at MIT Jameel Clinic. She develops machine learning methods for drug
0
1
3
@MartinKlissarov
Martin Klissarov
7 months
Here is an RL perspective on understanding LLMs for decision making. Are LLMs best used as: policies / rewards / transition functions? How do you fine-tune them? Can LLMs explore / exploit? 🧵 Join us down this rabbit hole... (ICLR 2025 paper, done at ML Research)
2
31
169