Constantinos Daskalakis Profile
Constantinos Daskalakis

@KonstDaskalakis

Followers 7K · Following 561 · Media 5 · Statuses 125

Scientist. Computer Science Professor @MIT. Studying Computation, and using it as a lens to study Game Theory, Economics, and Machine Intelligence.

Cambridge, MA
Joined November 2018
@KonstDaskalakis
Constantinos Daskalakis
5 years
Min-max optimization (used, among other applications, in GANs and in adversarial training more broadly) is empirically challenging. We show why min-max optimization is hard in the following paper with Stratis Skoulakis and Manolis Zampetakis: https://t.co/ATgkHqeo2r
3
68
371
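For intuition (a standard illustration, not code from the paper): even on the simplest bilinear objective f(x, y) = xy, whose unique min-max point is the origin, simultaneous gradient descent-ascent spirals outward rather than converging. A minimal sketch:

```python
# Illustrative sketch (not from the paper): simultaneous gradient
# descent-ascent (GDA) on f(x, y) = x * y, min over x, max over y.
import numpy as np

def gda(x=1.0, y=1.0, lr=0.1, steps=100):
    for _ in range(steps):
        gx, gy = y, x                    # gradients of f(x, y) = x * y
        x, y = x - lr * gx, y + lr * gy  # min player descends, max player ascends
    return x, y

x, y = gda()
print(np.hypot(x, y))  # ~2.33, up from ~1.41: each step scales the
                       # distance from the equilibrium by sqrt(1 + lr**2)
```

Divergence on so simple an objective is only the warm-up; the paper goes further, giving complexity-theoretic evidence that approximate min-max solutions are intractable to find in general.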
@Aaroth
Aaron Roth
3 days
If you work at the intersection of CS and economics (or think your work is of interest to those who do!) consider submitting to the ESIF Economics and AI+ML meeting this summer at Cornell:
econometricsociety.org
2026 ESIF Economics and AI+ML Meeting (ESIF-AIML2026) June 16-17, 2026 Cornell University Department...
2
35
126
@apjacob03
Athul Paul Jacob
1 month
Today marks an important milestone. I’m launching Percepta together with @htaneja, @hirshjain, @tmathew0309, Radha Jain, @marisbest2, @KonstDaskalakis and an incredible team, with the goal of bringing AI to the core industries that run our economy. For AI to deliver…
percepta.ai
Transforming critical institutions using applied AI. Let's harness the frontier.
10
20
82
@KonstDaskalakis
Constantinos Daskalakis
8 months
💪
@EugeneVinitsky
Eugene Vinitsky 🦋
8 months
Hiring researchers and engineers for a stealth, applied research company with a focus on RL x foundation models. Folks on the team already are leading RL / learning researchers. If you think you'd be good at the research needed to get things working in practice, email me
0
0
19
@cmcaram
Constantine Caramanis
10 months
🚀 🇬🇷 A year in the making! I’ve just completed a set of 21 lectures in Machine Learning, in Greek, designed for high school students. The course introduces key ML concepts, coding in Python & PyTorch, and real-world AI applications. #MachineLearning #AI #EdTech #Greece
4
18
90
@KonstDaskalakis
Constantinos Daskalakis
11 months
Proud of my brother's research!
@NikosDaskalakis
SysBioStress - Daskalakis Lab
11 months
Very proud and energized for 2025 that our paper on 🧠 molecular pathology in #PTSD and #MDD made it into the #NIH Director’s 2024 #ScienceHighlights!
0
4
37
@NikosDaskalakis
SysBioStress - Daskalakis Lab
1 year
💡Daskalakis & Ressler labs @McLeanHospital @harvardmed are extremely excited & proud 🥳to share that our collaborative work with Nemeroff @UTAustin and Kleinman @LieberInstitute labs on molecular underpinnings of #PTSD & #MDD is out @ScienceMagazine https://t.co/pcQXbtsXkv! 🧵👇
science.org
The molecular pathology of stress-related disorders remains elusive. Our brain multiregion, multiomic study of posttraumatic stress disorder (PTSD) and major depressive disorder (MDD) included the...
7
38
100
@KonstDaskalakis
Constantinos Daskalakis
2 years
Can you train a generative model using only noisy data? If so, this would alleviate the training-data memorization issue plaguing certain genAI models. In exciting work with @giannis_daras and @AlexGDimakis, we show how to do this for diffusion-based generative models.
@giannis_daras
Giannis Daras
2 years
Consistent Diffusion Meets Tweedie. Our latest paper introduces an exact framework to train/finetune diffusion models like Stable Diffusion XL solely with noisy data. A year's worth of work, and a breakthrough in reducing memorization, with implications for copyright 🧵
0
11
69
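Background for this thread (a standard identity, not necessarily the paper's exact framework): Tweedie's formula recovers the posterior mean of a clean sample from the score of the noisy marginal, which is exactly the quantity a diffusion model estimates:

$$\mathbb{E}[x_0 \mid x_t] \;=\; x_t + \sigma_t^2 \, \nabla_{x_t} \log p_t(x_t), \qquad x_t = x_0 + \sigma_t \varepsilon, \quad \varepsilon \sim \mathcal{N}(0, I).$$

So a model that learns the score of the noisy distribution can denoise optimally in expectation, even if no clean sample is ever observed during training.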
@sxysun1
sxysun ⚡️🤖
2 years
Stoked to announce an Agentic Markets workshop @agenticmarkets at #ICML 2024! @icmlconf 📇 Details: https://t.co/wOdgjzmXOb ✏️ Call for papers: due May 17th 📅 Conference: July 26/27th featuring GOAT speakers like Tuomas Sandholm, Gillian Hadfield, @drimgemp @KonstDaskalakis
6
20
67
@JTasioulas
John Tasioulas
2 years
The call for papers is coming soon… Delighted to confirm the participation of the Greek Prime Minister @kmitsotakis and the chair of Greece’s high level AI committee @KonstDaskalakis at this event. June 20th in Athens 🇬🇷
@JTasioulas
John Tasioulas
2 years
Plans are afoot to hold a workshop on Aristotelian approaches to AI ethics in Athens this summer (second half of June). This is an event I’m co-organising with Prof Josiah Ober and generously supported by @PJMFnd. We are especially keen on involving younger scholars. More soon.
1
13
33
@KonstDaskalakis
Constantinos Daskalakis
2 years
Note that concurrent work by Peng & Rubinstein establishes a similar set of results.
0
0
2
@KonstDaskalakis
Constantinos Daskalakis
2 years
At a technical level, our upper bound employs a tree structure on a collection of external regret learners. Each external regret learner updates its action distribution only sporadically, allowing us to avoid polynomial dependence on the number of experts.
1
0
2
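A loose sketch of that flavor of construction (my reconstruction for illustration only, not the paper's algorithm; depth 2, with multiplicative weights as the base external-regret learner):

```python
# Hedged, illustrative sketch: a two-level "tree" of external-regret
# (multiplicative-weights) learners. The root updates only once per block
# of M rounds, i.e. sporadically; a fresh leaf learner runs within each
# block. The full construction presumably uses deeper trees, with learners
# higher up updating ever more rarely.
import numpy as np

def mw(weights, loss, eta):
    """One multiplicative-weights update on a loss vector in [0, 1]^N."""
    w = weights * np.exp(-eta * loss)
    return w / w.sum()

def two_level_sketch(losses, M, eta=0.5):
    """T = M * M rounds of play against a loss sequence of shape (T, N)."""
    T, N = losses.shape
    assert T == M * M
    root = np.ones(N) / N
    plays = []
    for b in range(M):
        leaf = np.ones(N) / N                 # leaf learner restarts each block
        block_avg = np.zeros(N)
        for s in range(M):
            t = b * M + s
            plays.append((root + leaf) / 2)   # uniform mixture along the path
            leaf = mw(leaf, losses[t], eta)   # leaf: updates every round
            block_avg += losses[t] / M
        root = mw(root, block_avg, eta)       # root: one sporadic update per block
    return np.array(plays)

# Example: N = 1000 experts, M = 32, so T = 1024 rounds of random losses.
rng = np.random.default_rng(0)
plays = two_level_sketch(rng.random((32 * 32, 1000)), M=32)
```

The root's distribution changes only M times over T = M² rounds; that sporadic schedule is what avoids the polynomial dependence on the number of experts N.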
@KonstDaskalakis
Constantinos Daskalakis
2 years
The number of rounds in our upper bound depends exponentially on the inverse of the approximation parameter. We present a lower bound showing that this dependence is necessary, even when the adversary is oblivious.
1
0
1
@KonstDaskalakis
Constantinos Daskalakis
2 years
They also provide algorithms for computing low-rank and sparse correlated equilibria.
1
0
1
@KonstDaskalakis
Constantinos Daskalakis
2 years
As a consequence of our main results and the standard connection between no-swap regret learning and correlated equilibria, we obtain a polynomial-time algorithm for finding an approximate correlated equilibrium in extensive-form games, for constant approximation parameter.
1
0
1
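For reference, the standard connection invoked here: if every player's swap regret after T rounds is at most εT, then the empirical distribution of play is an ε-approximate correlated equilibrium, i.e., a distribution μ over action profiles under which no player i can gain more than ε by re-mapping its recommended action:

$$\mathbb{E}_{a \sim \mu}\big[\, u_i(\phi(a_i), a_{-i}) - u_i(a) \,\big] \;\le\; \varepsilon \qquad \text{for every player } i \text{ and every map } \phi : A_i \to A_i.$$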
@KonstDaskalakis
Constantinos Daskalakis
2 years
More generally, we obtain that any class of finite Littlestone dimension (or sequential fat-shattering dimension) has a no-swap-regret learner. We also derive an extension to the bandit setting, for which we show a nearly optimal swap regret bound for large action spaces.
1
0
1
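For reference (a standard definition, not from the thread): the Littlestone dimension of a class $\mathcal{F} \subseteq \{0,1\}^{\mathcal{X}}$ is

$$\mathrm{Ldim}(\mathcal{F}) \;=\; \max\{\, d : \text{some complete depth-}d\text{ instance tree is shattered by } \mathcal{F} \,\},$$

where shattering means every root-to-leaf sequence of binary labels is realized by some $f \in \mathcal{F}$; finiteness of this dimension characterizes online (mistake-bound) learnability.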
@KonstDaskalakis
Constantinos Daskalakis
2 years
We show that no-swap regret learning is possible for any function class which has a no-external regret learner. As an example, in the setting of learning with expert advice with N experts, only polylog(N) rounds are needed to obtain swap regret bounded by an arbitrary constant.
1
0
1
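For reference, the quantity at stake in this thread: with action set $A$, played actions $a_1, \dots, a_T$, and losses $\ell_t$, swap regret compares the learner against the best action-by-action relabeling:

$$\mathrm{SwapReg}(T) \;=\; \mathbb{E}\Big[\sum_{t=1}^{T} \ell_t(a_t)\Big] \;-\; \min_{\phi : A \to A} \mathbb{E}\Big[\sum_{t=1}^{T} \ell_t(\phi(a_t))\Big].$$

External regret is the special case where $\phi$ ranges only over constant maps, so a swap regret guarantee is strictly more demanding.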
@KonstDaskalakis
Constantinos Daskalakis
2 years
Specifically, we ask: can one improve upon the classical bounds of Blum-Mansour and Stolz-Lugosi, which suffer linear dependence on the number of actions, to achieve sublinear swap regret (equivalently, correlated equilibrium computation) with poly-logarithmic dependence on the number of actions instead?
1
0
1
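For context, the classical bounds being referenced (stated from memory, up to constants): the Blum-Mansour reduction runs one external-regret learner per action and guarantees swap regret $O(\sqrt{T N \log N})$ over $N$ actions, so its average swap regret drops below $\varepsilon$ only after

$$T \;=\; \Omega\!\Big(\frac{N \log N}{\varepsilon^2}\Big)$$

rounds. That linear dependence on the number of actions is exactly what the thread's result replaces with a poly-logarithmic one.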