Constantinos Daskalakis
@KonstDaskalakis
7K Followers · 561 Following · 5 Media · 125 Statuses
Scientist. Computer Science Professor @MIT. Studying Computation, and using it as a lens to study Game Theory, Economics, and Machine Intelligence.
Cambridge, MA
Joined November 2018
Min-max optimization (used among other applications in GANs, and adversarial training more broadly) is empirically challenging. We show why min-max optimization is hard in the following paper with Stratis Skoulakis and Manolis Zampetakis: https://t.co/ATgkHqeo2r
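As a standalone illustration of the empirical difficulty the paper studies (a toy sketch of my own, not the paper's construction): even on the simplest bilinear objective f(x, y) = xy, simultaneous gradient descent-ascent spirals away from the equilibrium at the origin instead of converging to it.

```python
# Toy example (not from the paper): simultaneous gradient descent-ascent
# on f(x, y) = x * y. The min player descends in x, the max player
# ascends in y; the iterates rotate and grow instead of converging to (0, 0).
def gda(x, y, lr=0.1, steps=100):
    for _ in range(steps):
        gx, gy = y, x                      # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy    # descend in x, ascend in y
    return x, y

x, y = gda(1.0, 1.0)
print(x * x + y * y)  # squared norm grows by (1 + lr^2) each step: divergence
```

Each update multiplies the squared norm by exactly 1 + lr², so the dynamics provably diverge on this instance, which is one concrete face of the hardness the paper formalizes.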
If you work at the intersection of CS and economics (or think your work is of interest to those who do!) consider submitting to the ESIF Economics and AI+ML meeting this summer at Cornell:
econometricsociety.org
2026 ESIF Economics and AI+ML Meeting (ESIF-AIML2026) June 16-17, 2026 Cornell University Department...
Today marks an important milestone. I’m launching Percepta together with @htaneja, @hirshjain, @tmathew0309, Radha Jain, @marisbest2, @KonstDaskalakis and an incredible team, with the goal of bringing AI to the core industries that run our economy. For AI to deliver
percepta.ai
Transforming critical institutions using applied AI. Let's harness the frontier.
Hiring researchers and engineers for a stealth, applied research company with a focus on RL x foundation models. Folks on the team already are leading RL / learning researchers. If you think you'd be good at the research needed to get things working in practice, email me
🚀 🇬🇷 A year in the making! I’ve just completed a set of 21 lectures in Machine Learning, in Greek, designed for high school students. The course introduces key ML concepts, coding in Python & PyTorch, and real-world AI applications. #MachineLearning #AI #EdTech #Greece
Proud of my brother's research!
Very proud and energized for 2025 that our paper on 🧠 molecular pathology in #PTSD and #MDD made it into the #NIH Director’s 2024 #ScienceHighlights!
Looking forward to this event on Thursday!
kathimerini.gr
Two of the world’s most important universities, Oxford and Stanford, are “coming” to Athens, in collaboration with the National Centre for Scientific Research “Demokritos” and the World Human...
June 20th in Athens 🇬🇷- the Lyceum Project: AI Ethics with Aristotle. With speakers including @alondra, @KonstDaskalakis, @yuvalshany1, @kmitsotakis, Josiah Ober, @mbrendan1, and @FotiniChristia. Places are rapidly filling up. Register now. https://t.co/pIKLE9YQZr
💡Daskalakis & Ressler labs @McLeanHospital @harvardmed are extremely excited & proud 🥳to share that our collaborative work with Nemeroff @UTAustin and Kleinman @LieberInstitute labs on molecular underpinnings of #PTSD & #MDD is out @ScienceMagazine
https://t.co/pcQXbtsXkv! 🧵👇
science.org
The molecular pathology of stress-related disorders remains elusive. Our brain multiregion, multiomic study of posttraumatic stress disorder (PTSD) and major depressive disorder (MDD) included the...
Can you train a generative model using only noisy data? If you can, this would alleviate the issue of training data memorization plaguing certain genAI models. In exciting work with @giannis_daras and @AlexGDimakis we show how to do this for diffusion-based generative models.
Consistent Diffusion Meets Tweedie. Our latest paper introduces an exact framework to train/finetune diffusion models like Stable Diffusion XL solely with noisy data. A year’s worth of work, and a breakthrough in reducing memorization, with implications for copyright 🧵
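Since the quoted title invokes Tweedie's formula, here is a minimal sketch of the identity that makes learning from noisy data conceivable (a 1-D Gaussian toy of my own, not the paper's method): the posterior mean of the clean signal equals the noisy observation plus the noise variance times the score of the *noisy* distribution, so denoising only requires a model of the noisy data.

```python
# Toy 1-D check of Tweedie's formula (illustrative, not the paper's method).
# Clean signal x ~ N(mu0, s0^2); observation y = x + sigma * eps.
mu0, s0, sigma = 2.0, 1.0, 0.5
y = 3.0  # a noisy observation

# Score of the noisy marginal p(y) = N(mu0, s0^2 + sigma^2):
score = -(y - mu0) / (s0**2 + sigma**2)

# Tweedie: E[x | y] = y + sigma^2 * (d/dy) log p(y)
tweedie = y + sigma**2 * score

# Closed-form Gaussian posterior mean, for comparison:
posterior_mean = (s0**2 * y + sigma**2 * mu0) / (s0**2 + sigma**2)

print(tweedie, posterior_mean)  # the two agree
```

The point of the identity: the right-hand side involves only the distribution of noisy observations, which is exactly what is available when clean training data cannot be used.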
Stoked to announce an Agentic Markets workshop @agenticmarkets at #ICML 2024! @icmlconf 📇 Details: https://t.co/wOdgjzmXOb ✏️ Call for papers: due May 17th 📅 Conference: July 26/27th featuring GOAT speakers like Tuomas Sandholm, Gillian Hadfield, @drimgemp @KonstDaskalakis
The call for papers is coming soon… Delighted to confirm the participation of the Greek Prime Minister @kmitsotakis and the chair of Greece’s high level AI committee @KonstDaskalakis at this event. June 20th in Athens 🇬🇷
Plans are afoot to hold a workshop on Aristotelian approaches to AI ethics in Athens this summer (second half of June). This is an event I’m co-organising with Prof Josiah Ober and generously supported by @PJMFnd. We are especially keen on involving younger scholars. More soon.
Note that concurrent work by Peng & Rubinstein establishes a similar set of results:
At a technical level, our upper bound employs a tree structure on a collection of external regret learners. Each external regret learner updates its action distribution only sporadically, allowing us to avoid polynomial dependence on the number of experts.
The number of rounds in our upper bound depends exponentially on the inverse of the approximation parameter. We present a lower bound showing that this dependence is necessary, even when the adversary is oblivious.
They also provide algorithms for computing low-rank and sparse correlated equilibria.
As a consequence of our main results and the standard connection between no-swap regret learning and correlated equilibria, we obtain a polynomial-time algorithm for finding an approximate correlated equilibrium in extensive-form games, for constant approximation parameter.
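To unpack the solution concept the thread targets, here is a small checker for correlated equilibria on the classic game of chicken (my own illustrative example, not code from the paper): a distribution over joint actions is a correlated equilibrium if no player can gain by deviating from any recommended action.

```python
# Correlated-equilibrium checker on the game of chicken (illustrative toy).
# Payoffs for the row player; D = dare, C = chicken. The game is symmetric.
U1 = {('D', 'D'): 0, ('D', 'C'): 7, ('C', 'D'): 2, ('C', 'C'): 6}
U2 = {(a, b): U1[(b, a)] for a, b in U1}

# Candidate CE: uniform over the three outcomes avoiding the crash (D, D).
mu = {('D', 'C'): 1/3, ('C', 'D'): 1/3, ('C', 'C'): 1/3}

def is_ce(mu, U1, U2, acts=('D', 'C'), tol=1e-9):
    # Row player: conditioned on being told a, deviating to dev must not help.
    for a in acts:
        for dev in acts:
            gain = sum(p * (U1[(dev, b)] - U1[(r, b)])
                       for (r, b), p in mu.items() if r == a)
            if gain > tol:
                return False
    # Column player: symmetric check.
    for b in acts:
        for dev in acts:
            gain = sum(p * (U2[(r, dev)] - U2[(r, c)])
                       for (r, c), p in mu.items() if c == b)
            if gain > tol:
                return False
    return True

print(is_ce(mu, U1, U2))  # True: no conditional deviation is profitable
```

The paper's result above says such equilibria can be found approximately, in polynomial time, even in extensive-form games.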
More generally, we obtain that any class of finite Littlestone dimension (or sequential fat-shattering dimension) has a no-swap-regret learner. We also derive an extension to the bandit setting, for which we show a nearly optimal swap regret bound for large action spaces.
We show that no-swap regret learning is possible for any function class which has a no-external regret learner. As an example, in the setting of learning with expert advice with N experts, only polylog(N) rounds are needed to obtain swap regret bounded by an arbitrary constant.
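For background on the headline result, here is a sketch of the classic Blum-Mansour reduction from swap regret to external regret (my own toy implementation of the well-known baseline, not the paper's tree construction, whose point is precisely to avoid this baseline's polynomial dependence on the number of actions): run one external-regret learner per action, play the stationary distribution of the matrix of their distributions, and feed each learner the losses scaled by the probability its action was played.

```python
import math

# Toy Blum-Mansour reduction (illustrative baseline, not the paper's method).
class Hedge:
    """Standard multiplicative-weights external-regret learner."""
    def __init__(self, n, eta=0.1):
        self.w = [1.0] * n
        self.eta = eta
    def dist(self):
        s = sum(self.w)
        return [wi / s for wi in self.w]
    def update(self, loss):
        self.w = [wi * math.exp(-self.eta * li) for wi, li in zip(self.w, loss)]

def stationary(Q, iters=200):
    """Stationary distribution of row-stochastic Q via power iteration p <- pQ."""
    n = len(Q)
    p = [1.0 / n] * n
    for _ in range(iters):
        p = [sum(p[i] * Q[i][j] for i in range(n)) for j in range(n)]
    return p

n, T = 3, 300
learners = [Hedge(n) for _ in range(n)]  # one learner per action
for t in range(T):
    Q = [L.dist() for L in learners]     # row i: learner i's distribution
    p = stationary(Q)                    # play the stationary distribution
    loss = [0.0, 1.0, 1.0]               # action 0 is always best here
    for i, L in enumerate(learners):
        L.update([p[i] * l for l in loss])  # scale losses by P(played i)

print(p)  # p places most of its mass on the dominant action 0
```

The stationary-distribution trick converts each learner's per-action external-regret guarantee into a swap-regret guarantee for the combined play, but with overhead growing polynomially in the number of actions, which is the dependence the tree-structured construction described above removes.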