Foundations of Cooperative AI Lab (FOCAL) at CMU

@FOCAL_lab

Followers: 208 · Following: 88 · Media: 0 · Statuses: 83

Foundations of Cooperative AI Lab @CarnegieMellon. Creating foundations of game theory for advanced AI w/ focus on achieving cooperation. Directed by @conitzer.

Pittsburgh, PA, USA
Joined January 2022
@FOCAL_lab
Foundations of Cooperative AI Lab (FOCAL) at CMU
2 years
New positions at FOCAL!
@conitzer
Vincent Conitzer
2 years
We are recruiting postdocs at the Foundations of Cooperative AI Lab (@FOCAL_lab) at @CarnegieMellon ( https://t.co/A72rHG8P87)! Please retweet / share / send great applicants our way! For different positions please reach out. @SCSatCMU @CSDatCMU @mldcmu https://t.co/cY2yIeDXRb
@conitzer
Vincent Conitzer
12 days
Excited to present our position paper in the 4:30pm @NeurIPSConf poster session today! We discuss the problem that AI systems may suspect they're being tested and act differently as a result, & how to approach this with game theory. https://t.co/78YaTn9amo https://t.co/JIPUm5sGb5
arxiv.org
This position paper argues for two claims regarding AI testing and evaluation. First, to remain informative about deployment behaviour, evaluations need to account for the possibility that AI systems...
@conitzer
Vincent Conitzer
13 days
New paper from our lab, led by UG Xander Heckett: a game-theoretic view of AI safety via debate, where two agents argue for different courses of action and we try to set the rules so that the one that is right will win. https://t.co/ynecIVEBZE
arxiv.org
We consider settings where an uninformed principal must hear arguments from two better-informed agents, corresponding to two possible courses of action that they argue for. The arguments are...
@conitzer
Vincent Conitzer
21 days
(continuing) I worry too many people think of today's LLM-based chatbots as a good model for studying AGI. Even if they're on the path to AGI, they're not at all what AGI would be like. Here's a chapter I wrote about this recently. https://t.co/EaJF1Z7YAZ
@conitzer
Vincent Conitzer
2 months
article on Nash (a CMU alum!) with a mention of our @FOCAL_lab and the Nash75 conference at Oxford as well https://t.co/dv3Cz1jshb
cmu.edu
Mathematician and Carnegie Mellon University alumnus John F. Nash, Jr. published the Nash Equilibrium 75 years ago.
@coop_ai
Cooperative AI Foundation
3 months
‘Foundations and Frontiers of Cooperative AI’ - with @conitzer (@CarnegieMellon) -
@conitzer
Vincent Conitzer
3 months
Excited that our paper "AI Testing Should Account for Sophisticated Strategic Behaviour" was accepted to the first NeurIPS position paper track! We argue that AI systems may act strategically w.r.t. the possibility they are currently being tested. https://t.co/78YaTn9amo
arxiv.org
This position paper argues for two claims regarding AI testing and evaluation. First, to remain informative about deployment behaviour, evaluations need to account for the possibility that AI systems...
@conitzer
Vincent Conitzer
3 months
I got to give the opening lecture for the 2025 Cooperative AI Summer School and the video is now online! "Foundations and Frontiers of Cooperative AI" https://t.co/kxFQE14OV3 @coop_ai
@conitzer
Vincent Conitzer
3 months
Congratulations to Emin Berker for receiving the Best Talk Award at COMSOC'25! (in the picture with coauthor Edith Elkind; paper https://t.co/IHPwkCdtHD )
@conitzer
Vincent Conitzer
3 months
Tomorrow at COMSOC, Emin Berker will present our paper on creating clone candidates & strategic nomination (e.g., does the leaderboard rule on Chatbot Arena create incentives to submit/withhold slightly differently fine-tuned versions of the same model?).
arxiv.org
We study two axioms for social choice functions that capture the impact of similar candidates: independence of clones (IoC) and composition consistency (CC). We clarify the relationship between...
@conitzer
Vincent Conitzer
4 months
short paper by @Yoshua_Bengio and me about what LLMs tell us about our own mental lives (extended from earlier blog post) https://t.co/duAzKJBHkt
philpapers.org
@conitzer
Vincent Conitzer
5 months
(1/n) I noticed that on the IMO where several LLM-based systems just got gold medals, the fifth problem (the last one solved by Gemini Deep Think) was a game problem, so I had no excuse :-) not to try it myself first and then see how the solutions compared.
@conitzer
Vincent Conitzer
5 months
The Nash75 talks are on YouTube! Below is my talk "Game Theory for AI Agents" (link also gives all other talks on the side). https://t.co/9e3XNOG3Ft
@conitzer
Vincent Conitzer
5 months
(2/2) Poster Thu 16:30 Observation Interference in Partially Observable Assistance Games (led by Scott Emmons and @C_Oesterheld) -- a model of the human-AI value alignment problem which allows the human and the AI assistant to have partial observations. https://t.co/wINhEzr81F
arxiv.org
We study partially observable assistance games (POAGs), a model of the human-AI value alignment problem which allows the human and the AI assistant to have partial observations. Motivated by...
@conitzer
Vincent Conitzer
5 months
(1/2) Two papers with authors from our @FOCAL_lab being presented at ICML @icmlconf in the next two days: Oral Wed 16:15 & Poster Wed 16:30 Expected Variational Inequalities (led by Brian Zhang and Ioannis Anagnostides). https://t.co/3cTN7rQ1IT
arxiv.org
Variational inequalities (VIs) encompass many fundamental problems in diverse areas ranging from engineering to economics and machine learning. However, their considerable expressivity comes at...
@conitzer
Vincent Conitzer
5 months
Want to know which game is better without making equilibrium selection assumptions? Caspar Oesterheld (@C_Oesterheld) is about to present our @FOCAL_lab paper at TARK, "Choosing What Game to Play Without Selecting Equilibria"! https://t.co/pIWdi3rcDi
@conitzer
Vincent Conitzer
5 months
A fantastic group of young researchers at the Cooperative AI Summer School, bodes well for the future of cooperative AI!
@coop_ai
Cooperative AI Foundation
5 months
Wrapping up a great first day of our Cooperative AI Summer School talks featuring: ❇️ Lewis Hammond @lrhammond: Opening ❇️ Vincent Conitzer @conitzer: Foundations and Frontiers of Cooperative AI ❇️ Audrey Tang @audreyt & Zarinah Agnew @zarinahagnew: Empowering Collective
@conitzer
Vincent Conitzer
5 months
(2/2) From Independence of Clones to Composition Consistency: A Hierarchy of Barriers to Strategic Nomination W 1:30pm session Led by Emin Berker & relevant to evaluating AI, where it is easy to create very similar variants of a given AI system ("clones"). https://t.co/IHPwkCdtHD
arxiv.org
We study two axioms for social choice functions that capture the impact of similar candidates: independence of clones (IoC) and composition consistency (CC). We clarify the relationship between...
@conitzer
Vincent Conitzer
5 months
(1/2) @AcmSIGecom EC'25 has started! Two papers with coauthors from our @FOCAL_lab: Learning and Computation of Phi-Equilibria at the Frontier of Tractability W 8:30am session Led by Brian Zhang and Ioannis Anagnostides https://t.co/B0EkBT8y9G
arxiv.org
$Φ$-equilibria -- and the associated notion of $Φ$-regret -- are a powerful and flexible framework at the heart of online learning and game theory, whereby enriching the set of deviations...
@conitzer
Vincent Conitzer
6 months
Looking forward to the 75 years of Nash equilibrium conference! I'll speak on game theory for AI agents. https://t.co/s34w1dXxP9
mfo.ac.uk
A symposium at Oxford to celebrate 75 years of Nash equilibrium
@jankulveit
Jan Kulveit
6 months
Human-aligned AI Summer School @humanalignedai has an updated list of speakers @FazlBarez (Oxford) @lrhammond (@coop_ai) @EvanHub (Anthropic) @g_leech_ (@LeverhulmeCFI) Nathaniel Sauerberg (@FOCAL_lab) @noahysiegel (@GoogleDeepMind) @stanislavfort and Torben Swoboda. Apply ~now