Alexander D'Amour (@alexdamour.bsky.social)
@alexdamour
Followers 4K · Following 13K · Media 66 · Statuses 2K
Research Scientist at Google Brain. Statistics, Data Science, ML, causality, fairness. Prev at Harvard (PhD), UC Berkeley (VAP). Opinions my own. he/him.
Cambridge, MA
Joined June 2009
An improved version of the Underspecification paper is out at JMLR! https://t.co/0G8ovNY3nc (Late announcement)
NEW from a big collaboration at Google: Underspecification Presents Challenges for Credibility in Modern Machine Learning Explores a common failure mode when applying ML to real-world problems. 🧵 1/14 https://t.co/7vX5D8yMhq
Dusting this example off again in light of some recent education funding discourse
As we all know, in NBA basketball, the distance of the nearest defender has no causal effect on the odds of making a shot. Look, the line is flat! (Example by @nsandholtz, made from NBA player tracking data; @afranks53, @LukeBornn, and I have returned to it time and again)
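The flat line is a confounding story: defenders close out hardest on the best shooters, so defender distance and shooter skill can cancel in the marginal plot. A toy simulation (all numbers invented here, not the actual NBA tracking data) reproduces the effect:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

n = 200_000
near, far = [], []
for _ in range(n):
    skill = random.gauss(0, 1)
    # Confounding: defenders guard better shooters more tightly.
    distance = 3.0 - skill + random.gauss(0, 0.5)
    # Distance has a real, positive causal effect on making the shot.
    p_make = sigmoid(-1.0 + skill + 0.8 * (distance - 3.0))
    made = random.random() < p_make
    (far if distance > 3.0 else near).append(made)

rate_near = sum(near) / len(near)
rate_far = sum(far) / len(far)
# Marginally, open and contested shots go in at nearly the same rate,
# even though, holding skill fixed, extra space clearly helps.
print(rate_near, rate_far)
```

Regressing makes on distance alone recovers the flat line; only conditioning on shooter skill (or another causal identification strategy) recovers the true effect.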
Sadly I can't make it myself to #ICML2025, but the amazing @alexdamour is presenting this at West Exhibition Hall B2-B3 W-816, Tuesday 15 Jul, 11am - 1:30pm PDT! Come check it out and talk about LLM reward hacking, sample efficiency, and low-dimensional adaptation with him!
New work! 💪🏻💥🤯 When Can Proxies Improve the Sample Complexity of Preference Learning? Our paper is accepted at @icmlconf 2025. Fantastic joint work with @spectraldani @zhengyan_shi @Mengyue_Yang_ @PMinervini @alexdamour, Matt Kusner. 1/n
But is the JEC data that Nate is putting into his regression picking up that variation in any meaningful way? I don't buy it. Because it isn't actually making any attempt to measure state-level differences in inflation. It's just picking up preexisting differences in income.
*causal... Sigh. Autocorrect is not used to poasting.
On the substance, my take is that most of his points would be non-controversial if he didn't then try to support them w poorly formalized quantitative "evidence", but that makes debate about how bad the analysis is kinda pointless
I, for one, appreciate the service Nate Silver provides: offering himself up as a punching bag so that casual/stats types like myself can work thru their anxiety around elections
If you're a PhD student interested in interning with me or one of my amazing colleagues at Microsoft Research New England (@MSRNE, @MSFTResearch) this summer, please apply here
Excited to announce that our paper, “A toolbox surfacing health equity harms and biases in large language models” is now published with @NatureMedicine: https://t.co/dm1OKAlfkV.
nature.com · Nature Medicine: Identifying a complex panel of bias dimensions to be evaluated, a framework is proposed to assess how prone large language models are to biased reasoning, with possible consequences on equity-related harms.
New reqs for low- to high-level researcher positions: https://t.co/gba2GrmdSY, https://t.co/gMXUc8jgqH, https://t.co/XFnLBfTgwk, https://t.co/yXTOoiWoMK, with postdocs from Akshay and @MiroDudik https://t.co/4xbdiiZn6b. Please apply or pass to those who may :-)
📣Announcing the 2024 NeurIPS Workshop on Statistical Frontiers in LLMs and Foundation Models 📣 Submissions open now, deadline September 15th https://t.co/Q97EWZcu2T If your work intersects with statistics and black-box models, please submit! This includes: ✅ Bias ✅
LLM best-of-n sampling works great in practice---but why? Turns out: it's the best possible policy for maximizing win rate over the base model! Then: we use this to get a truly sweet alignment scheme: easy tweaks, huge gains w @ybnbxb @ggarbacea
https://t.co/GHaP911Lil
In our new working paper "Dubious Debiasing" presented in today's #CHI2024 HEAL workshop, we argue that modern LLMs like ChatGPT cannot be fair in the ways currently conceived in ML/NLP. We need new context-adaptive methods to tackle the evaluation crisis: https://t.co/4NVaT4CZS2
Proxy methods: not just for causal effect estimation! #aistats24 Adapt to domain shifts in an unobserved latent, with either -concept variables -multiple training domains https://t.co/pPbg5Dk6tz Tsai @stephenpfohl @walesalaudeen96 @nicole_chiou Kusner @alexdamour @sanmikoyejo
📢📢 I am looking for a student researcher to work with me and my colleagues at Google DeepMind Zürich on vision-language research. It will be a 100% 24 weeks onsite position in Switzerland. Reach out to me (xzhai@google.com) if interested. Bonus: amazing view🏔️👇
Excited to share our #ICLR2024 paper, focused on reducing bias in CLIP models. We study the impact of data balancing and come up with some recommendations for how to apply it effectively. Surprising insights included! Here are 3 main takeaways.
In case you missed it @bioptimus_ai: We are looking for the best talents (ML/biology/large-scale infrastructure) to join our fantastic technical team @ZeldaMariet @FelipeLlinares @jeanphi_vert 👉
bioptimus.com
We are building a team of creative minds. Together, we aspire to redefine the landscape of biology with AI, unlocking its potential for everyone.
The @icmlconf 2024 Ethics Chairs, @KLdivergence and @DrLaurenOR, wrote a blog about the ethics review. Helpful for all authors and reviewers at ICML to better understand the process! https://t.co/OnkunUcmCg . . .
medium.com
This post is written by the ICML 2024 ethics chairs: Kristian Lum and Lauren Oakden-Rayner.
This one weird* trick will fix all** your LLM RLHF issues! * not weird ** as long as your issues are about how to combine multiple objectives, and avoid reward hacking
Transforming the reward used in RLHF gives big wins in LLM alignment and makes it easy to combine multiple reward functions! https://t.co/o1PvnpkxrL
@nagpalchirag @JonathanBerant @jacobeisenstein @alexdamour @sanmikoyejo @victorveitch @GoogleDeepMind
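One way to read the "easy tweaks" (a hedged sketch of the general idea, not the paper's exact method): pass each reward through a saturating transform such as log-sigmoid before summing, so the combined objective can't be gamed by driving a single reward arbitrarily high:

```python
import math

def log_sigmoid(x):
    # Numerically stable log(sigmoid(x)).
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

def combined_reward(rewards):
    # Each transformed reward saturates at 0 from below, so a huge score
    # on one objective cannot buy back a very poor score on another.
    return sum(log_sigmoid(r) for r in rewards)

balanced = combined_reward([2.0, 2.0])    # decent on both objectives
hacked = combined_reward([100.0, -5.0])   # reward-hacking one objective
print(balanced, hacked)
```

Under a raw sum, the hacked policy would dominate (95 vs. 4); under the saturating transform, the balanced one wins.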