Alexander D'Amour (@alexdamour.bsky.social) Profile

@alexdamour

Followers: 4K · Following: 13K · Media: 66 · Statuses: 2K

Research Scientist at Google Brain. Statistics, Data Science, ML, causality, fairness. Prev at Harvard (PhD), UC Berkeley (VAP). Opinions my own. he/him.

Cambridge, MA
Joined June 2009
@alexdamour
Alexander D'Amour (@alexdamour.bsky.social)
3 years
An improved version of the Underspecification paper is out at JMLR! https://t.co/0G8ovNY3nc (Late announcement)
@alexdamour
Alexander D'Amour (@alexdamour.bsky.social)
5 years
NEW from a big collaboration at Google: Underspecification Presents Challenges for Credibility in Modern Machine Learning Explores a common failure mode when applying ML to real-world problems. 🧵 1/14 https://t.co/7vX5D8yMhq
1 · 5 · 27
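The failure mode the thread names is easy to simulate: when two features are interchangeable on the training distribution, many predictors achieve identical training loss but behave very differently under shift. A toy sketch of that idea (invented data, not an example from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# In training, a causal feature and a spurious one are perfectly correlated,
# so many distinct predictors fit the data equally well (underspecification).
causal = rng.normal(size=n)
spurious = causal.copy()              # holds only in the training distribution
y = causal + 0.1 * rng.normal(size=n)

for w_c, w_s in [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]:
    mse = np.mean((w_c * causal + w_s * spurious - y) ** 2)
    print(f"weights=({w_c}, {w_s})  train MSE={mse:.3f}")  # all identical

# At deployment the correlation breaks, and the "equivalent" predictors diverge.
spurious_new = rng.normal(size=n)     # spurious feature decouples from y
mse_shift = np.mean((1.0 * spurious_new - y) ** 2)
print(f"spurious-only predictor under shift: MSE={mse_shift:.3f}")  # much worse
```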
@alexdamour
Alexander D'Amour (@alexdamour.bsky.social)
20 days
Dusting this example off again in light of some recent education funding discourse
@alexdamour
Alexander D'Amour (@alexdamour.bsky.social)
5 years
As we all know, in NBA basketball, the distance of the nearest defender has no causal effect on the odds of making a shot. Look, the line is flat! (Example by @nsandholtz, made from NBA player tracking data; @afranks53, @LukeBornn, and I have returned to it time and again)
0 · 1 · 2
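The flat line is a textbook confounding story: defenders close out hardest on the shooters most likely to score, so the causal benefit of space is masked in the raw data. A minimal simulation of that mechanism (coefficients invented for illustration, not fit to the tracking data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Unobserved confounder: shooter skill. Skilled shooters make more shots
# AND draw tighter defense, which cancels the visible effect of space.
skill = rng.normal(0.0, 1.0, n)
dist = 4.0 - 1.5 * skill + rng.normal(0.0, 1.0, n)  # defender distance

# True structural model: distance really does help (+0.45 on the logit),
# and skill helps too.
logit = -1.0 + 0.45 * dist + 1.0 * skill
made = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Marginal slope of make probability on distance: roughly flat, because
# short distances coincide with high skill. "Look, the line is flat!"
print(np.polyfit(dist, made.astype(float), 1)[0])

# Hold skill (approximately) fixed and the causal effect reappears.
elite = np.abs(skill - 1.0) < 0.25
print(np.polyfit(dist[elite], made[elite].astype(float), 1)[0])
```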
@_zhuyuchen
Yuchen Zhu
5 months
Sadly I can't make it myself to #ICML2025, but the amazing @alexdamour is presenting this at West Exhibition Hall B2-B3 W-816, Tuesday 15 Jul 11am - 1.30pm PDT! Come check it out and talk about LLM reward hacking, sample efficiency and low-dimensional adaptation with him!
1 · 6 · 6
@_zhuyuchen
Yuchen Zhu
7 months
New work! 💪🏻💥🤯 When Can Proxies Improve the Sample Complexity of Preference Learning? Our paper is accepted at @icmlconf 2025. Fantastic joint work with @spectraldani @zhengyan_shi @Mengyue_Yang_ @PMinervini @alexdamour, Matt Kusner. 1/n
1 · 12 · 43
@bencasselman
Ben Casselman
1 year
But is the JEC data that Nate is putting into his regression picking up that variation in any meaningful way? I don't buy it. Because it isn't actually making any attempt to measure state-level differences in inflation. It's just picking up preexisting differences in income.
1 · 7 · 53
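The mechanism Casselman is pointing at is ordinary omitted-variable bias: a regressor that never measures inflation variation, but does track preexisting income, will inherit income's coefficient. A toy version of that mechanism, with all numbers and the variable names invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_states = 50

# Preexisting state income differences: the real driver in this toy world.
income = rng.normal(0.0, 1.0, n_states)

# A hypothetical cost-of-living regressor that never measures inflation
# variation; it just tracks income plus noise.
jec_proxy = income + rng.normal(0.0, 0.3, n_states)

# Outcome depends on income only; the true "inflation" effect is zero here.
outcome = 2.0 * income + rng.normal(0.0, 0.5, n_states)

# The naive regression still assigns the proxy a large coefficient,
# because the proxy is just relabeled income.
slope = np.polyfit(jec_proxy, outcome, 1)[0]
print(f"proxy coefficient: {slope:.2f}")  # large, despite a zero true effect
```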
@alexdamour
Alexander D'Amour (@alexdamour.bsky.social)
1 year
*causal... Sigh. Autocorrect is no longer used to poasting.
0 · 0 · 2
@alexdamour
Alexander D'Amour (@alexdamour.bsky.social)
1 year
On the substance, my take is that most of his points would be non-controversial if he didn't then try to support them w poorly formalized quantitative "evidence", but that makes debate about how bad the analysis is kinda pointless
1 · 0 · 5
@alexdamour
Alexander D'Amour (@alexdamour.bsky.social)
1 year
I, for one, appreciate the service Nate Silver provides: offering himself up as a punching bag so that casual/stats types like myself can work thru their anxiety around elections
1 · 0 · 23
@LesterMackey
Lester Mackey
1 year
If you're a PhD student interested in interning with me or one of my amazing colleagues at Microsoft Research New England (@MSRNE, @MSFTResearch) this summer, please apply here
6 · 67 · 311
@stephenpfohl
Stephen Pfohl
1 year
Excited to announce that our paper, “A toolbox for surfacing health equity harms and biases in large language models”, is now published with @NatureMedicine: https://t.co/dm1OKAlfkV.
[Link preview: nature.com — Nature Medicine]
@NatureMedicine
Nature Medicine
1 year
A complex panel of bias dimensions is identified for evaluation, and a framework is proposed to assess how prone large language models are to biased reasoning, with possible consequences for equity-related harms.
1 · 9 · 59
@JohnCLangford
John Langford
1 year
New reqs for low to high level researcher positions: https://t.co/gba2GrmdSY , https://t.co/gMXUc8jgqH, https://t.co/XFnLBfTgwk, https://t.co/yXTOoiWoMK, with postdocs from Akshay and @MiroDudik https://t.co/4xbdiiZn6b . Please apply or pass to those who may :-)
0 · 33 · 108
@ml_angelopoulos
Anastasios Nikolas Angelopoulos
1 year
📣Announcing the 2024 NeurIPS Workshop on Statistical Frontiers in LLMs and Foundation Models 📣 Submissions open now, deadline September 15th https://t.co/Q97EWZcu2T If your work intersects with statistics and black-box models, please submit! This includes: ✅ Bias ✅
2 · 33 · 123
@victorveitch
Victor Veitch 🔸
2 years
LLM best-of-n sampling works great in practice---but why? Turns out: it's the best possible policy for maximizing win rate over the base model! Then: we use this to get a truly sweet alignment scheme: easy tweaks, huge gains w @ybnbxb @ggarbacea https://t.co/GHaP911Lil
5 · 20 · 82
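Best-of-n itself is a few lines: draw n completions from the base model, score each with the reward model, return the argmax. A sketch with stand-in `sample` and `score` callables (not the paper's code):

```python
from typing import Callable, List

def best_of_n(
    prompt: str,
    sample: Callable[[str], str],        # draws one completion from the base model
    score: Callable[[str, str], float],  # reward model: (prompt, completion) -> reward
    n: int = 16,
) -> str:
    """Return the highest-reward completion out of n base-model samples.

    The thread's claim: this is the win-rate-maximizing policy relative to
    the base model among policies at the induced distance from it.
    """
    candidates: List[str] = [sample(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))
```

Larger n trades more base-model compute for higher reward, and for more drift away from the base policy.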
@jacyanthis
Jacy Reese Anthis
2 years
In our new working paper "Dubious Debiasing" presented in today's #CHI2024 HEAL workshop, we argue that modern LLMs like ChatGPT cannot be fair in the ways currently conceived in ML/NLP. We need new context-adaptive methods to tackle the evaluation crisis: https://t.co/4NVaT4CZS2
1 · 8 · 23
@ArthurGretton
Arthur Gretton
2 years
Proxy methods: not just for causal effect estimation! #aistats24 Adapt to domain shifts in an unobserved latent, with either concept variables or multiple training domains. https://t.co/pPbg5Dk6tz Tsai @stephenpfohl @walesalaudeen96 @nicole_chiou Kusner @alexdamour @sanmikoyejo
0 · 14 · 40
@XiaohuaZhai
Xiaohua Zhai
2 years
📢📢 I am looking for a student researcher to work with me and my colleagues at Google DeepMind Zürich on vision-language research. It will be a 100% onsite, 24-week position in Switzerland. Reach out to me (xzhai@google.com) if interested. Bonus: amazing view🏔️👇
6 · 29 · 243
@ibomohsin
Ibrahim Alabdulmohsin | إبراهيم العبدالمحسن
2 years
Excited to share our #ICLR2024 paper, focused on reducing bias in CLIP models. We study the impact of data balancing and come up with some recommendations for how to apply it effectively. Surprising insights included! Here are 3 main takeaways.
3 · 23 · 87
@RJenatton
rodolphe_jenatton
2 years
In case you missed it @bioptimus_ai: We are looking for the best talent (ML/biology/large-scale infrastructure) to join our fantastic technical team @ZeldaMariet @FelipeLlinares @jeanphi_vert 👉
[Link preview: bioptimus.com — “We are building a team of creative minds. Together, we aspire to redefine the landscape of biology with AI, unlocking its potential for everyone.”]
0 · 10 · 19
@KLdivergence
Kristian Lum
2 years
The band is getting back together! Tomorrow, I’m joining @wsisaac and so many others I admire on @Google DeepMind’s Ethics team to work on AI evaluation. Exciting times ahead…
20 · 3 · 190
@zicokolter
Zico Kolter
2 years
The @icmlconf 2024 Ethics Chairs, @KLdivergence and @DrLaurenOR, wrote a blog post about the ethics review. Helpful for all authors and reviewers at ICML who want to better understand the process! https://t.co/OnkunUcmCg
[Link preview: medium.com — “This post is written by the ICML 2024 ethics chairs: Kristian Lum and Lauren Oakden-Rayner.”]
0 · 11 · 23
@victorveitch
Victor Veitch 🔸
2 years
This one weird* trick will fix all** your LLM RLHF issues!
*not weird
**as long as your issues are about how to combine multiple objectives, and avoid reward hacking
@wzihao12
Zihao Wang
2 years
Transforming the reward used in RLHF gives big wins in LLM alignment and makes it easy to combine multiple reward functions! https://t.co/o1PvnpkxrL @nagpalchirag @JonathanBerant @jacobeisenstein @alexdamour @sanmikoyejo @victorveitch @GoogleDeepMind
0 · 5 · 20
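A hedged sketch of the kind of transform the thread describes: center each raw reward at a reference value, pass it through log-sigmoid so that over-optimizing any single reward yields diminishing returns (a guard against reward hacking), then sum across objectives for an AND-like combination. The function names and example numbers here are mine, not the paper's:

```python
import math
from typing import Sequence

def log_sigmoid(x: float) -> float:
    """Numerically stable log(sigmoid(x))."""
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

def combined_reward(rewards: Sequence[float], references: Sequence[float]) -> float:
    """Transform-then-sum: log-sigmoid saturates, so pushing one reward far
    past its reference stops paying off, and the sum behaves like an AND
    over objectives rather than letting one hacked reward dominate."""
    return sum(log_sigmoid(r - ref) for r, ref in zip(rewards, references))

# Invented numbers: hacking one objective no longer compensates for
# failing another.
print(combined_reward([8.0, -2.0], [0.0, 0.0]))  # ~ -2.13: sunk by the failing objective
print(combined_reward([1.0, 1.0], [0.0, 0.0]))   # ~ -0.63: balanced does better
```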