Constantinos Daskalakis Profile
Constantinos Daskalakis

@KonstDaskalakis

Followers: 7K · Following: 550 · Media: 5 · Statuses: 123

Scientist. Computer Science Professor @MIT. Studying Computation, and using it as a lens to study Game Theory, Economics, and Machine Intelligence.

Cambridge, MA
Joined November 2018
@KonstDaskalakis
Constantinos Daskalakis
5 years
Min-max optimization (used, among other applications, in GANs and in adversarial training more broadly) is empirically challenging. We show why min-max optimization is hard in the following paper with Stratis Skoulakis and Manolis Zampetakis:
3 replies · 68 reposts · 374 likes
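The failure mode behind this hardness is easy to reproduce. Below is a minimal sketch (illustrative only, not the paper's construction) of simultaneous gradient descent-ascent on the bilinear game f(x, y) = x·y, whose unique equilibrium is (0, 0); the iterates spiral outward instead of converging, which is one standard way to see why min-max optimization is empirically challenging.

```python
# Toy illustration (not the paper's construction): simultaneous gradient
# descent-ascent on the bilinear game f(x, y) = x * y. The unique
# equilibrium is (0, 0), yet the iterates spiral away from it.
import numpy as np

def gda(x0=1.0, y0=1.0, lr=0.1, steps=200):
    x, y = x0, y0
    trajectory = [(x, y)]
    for _ in range(steps):
        gx = y                              # df/dx for f(x, y) = x * y
        gy = x                              # df/dy
        x, y = x - lr * gx, y + lr * gy     # descend in x, ascend in y
        trajectory.append((x, y))
    return np.array(trajectory)

traj = gda()
dist = np.linalg.norm(traj, axis=1)
print(f"distance from equilibrium: start {dist[0]:.2f}, end {dist[-1]:.2f}")
```

With step size η, each update multiplies the distance to the equilibrium by sqrt(1 + η²), so simultaneous GDA diverges even on this convex-concave toy problem.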
@KonstDaskalakis
Constantinos Daskalakis
5 months
💪
@EugeneVinitsky
Eugene Vinitsky 🍒🦋
5 months
Hiring researchers and engineers for a stealth, applied research company with a focus on RL x foundation models. Folks on the team already are leading RL / learning researchers. If you think you'd be good at the research needed to get things working in practice, email me.
0 replies · 0 reposts · 20 likes
@KonstDaskalakis
Constantinos Daskalakis
5 months
RT @EugeneVinitsky: Hiring researchers and engineers for a stealth, applied research company with a focus on RL x foundation models. Folks…
0 replies · 34 reposts · 0 likes
@KonstDaskalakis
Constantinos Daskalakis
6 months
RT @cmcaram: 🚀 🇬🇷 A year in the making! I’ve just completed a set of 21 lectures in Machine Learning, in Greek, designed for high school st…
0 replies · 18 reposts · 0 likes
@KonstDaskalakis
Constantinos Daskalakis
7 months
Proud of my brother's research!
@NikosDaskalakis
SysBioStress - Daskalakis Lab
7 months
Very proud and energized for 2025 that our paper on 🧠 molecular pathology in #PTSD and #MDD made it in #NIH Director’s 2024 #ScienceHighlights!
0 replies · 4 reposts · 37 likes
@KonstDaskalakis
Constantinos Daskalakis
1 year
Can you train a generative model using only noisy data? If you can, this would alleviate the issue of training data memorization plaguing certain genAI models. In exciting work with @giannis_daras and @AlexGDimakis we show how to do this for diffusion-based generative models.
@giannis_daras
Giannis Daras
1 year
Consistent Diffusion Meets Tweedie. Our latest paper introduces an exact framework to train/finetune diffusion models like Stable Diffusion XL solely with noisy data. A year's worth of work, and a breakthrough in reducing memorization, with implications for copyright 🧵
0 replies · 11 reposts · 69 likes
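The identity behind the quoted work is Tweedie's formula: if y = x + σε with ε ~ N(0, I), then E[x | y] = y + σ²∇_y log p(y), so the posterior mean of the clean sample is recoverable from the score of the noisy distribution alone. A minimal numeric sanity check, assuming a Gaussian prior (where the score has a closed form) and illustrative parameter values, not the paper's training procedure:

```python
# Check Tweedie's formula: for y = x + sigma * eps,
# E[x | y] = y + sigma^2 * score(y), where score = d/dy log p(y).
# With a Gaussian prior x ~ N(mu, tau^2), p(y) = N(mu, tau^2 + sigma^2).
import numpy as np

rng = np.random.default_rng(0)
mu, tau, sigma = 1.0, 2.0, 0.5     # prior mean/std and noise level (arbitrary)

def tweedie_denoise(y):
    score = -(y - mu) / (tau**2 + sigma**2)  # score of the noisy marginal p(y)
    return y + sigma**2 * score              # Tweedie posterior mean

# Monte Carlo estimate of E[x | y ~= y0] for comparison.
x = rng.normal(mu, tau, size=2_000_000)
y = x + rng.normal(0.0, sigma, size=x.size)
y0 = 2.5
mask = np.abs(y - y0) < 0.01
print(f"Tweedie:     {tweedie_denoise(y0):.4f}")
print(f"Monte Carlo: {x[mask].mean():.4f}")
```

Both numbers agree (≈ 2.41 here), illustrating why access to noisy samples plus an estimate of their score suffices for denoising.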
@KonstDaskalakis
Constantinos Daskalakis
1 year
RT @sxysun1: Stoked to announce an Agentic Markets workshop @agenticmarkets at #ICML 2024! @icmlconf. 📇 Details: ✏️…
0 replies · 20 reposts · 0 likes
@KonstDaskalakis
Constantinos Daskalakis
1 year
RT @JTasioulas: The call for papers is coming soon… Delighted to confirm the participation of the Greek Prime Minister @kmitsotakis and the…
0 replies · 13 reposts · 0 likes
@KonstDaskalakis
Constantinos Daskalakis
2 years
Note that concurrent work by Peng & Rubinstein establishes a similar set of results:
0 replies · 0 reposts · 1 like
@KonstDaskalakis
Constantinos Daskalakis
2 years
At a technical level, our upper bound employs a tree structure on a collection of external regret learners. Each external regret learner updates its action distribution only sporadically, allowing us to avoid polynomial dependence on the number of experts.
1 reply · 0 reposts · 2 likes
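To illustrate the "sporadic update" idea in isolation (a toy sketch only, not the paper's tree construction), here is a multiplicative-weights external-regret learner that buffers losses and refreshes its action distribution only every B rounds:

```python
# Toy sketch of sporadic updates (NOT the paper's tree construction):
# a multiplicative-weights learner that accumulates losses in a buffer
# and only refreshes its action distribution every `update_every` rounds.
import numpy as np

class LazyMW:
    def __init__(self, n_actions, eta=0.1, update_every=10):
        self.weights = np.ones(n_actions)
        self.eta = eta
        self.update_every = update_every
        self.buffer = np.zeros(n_actions)  # losses since the last refresh
        self.t = 0

    def play(self):
        return self.weights / self.weights.sum()

    def observe(self, loss_vector):
        self.buffer += loss_vector
        self.t += 1
        if self.t % self.update_every == 0:          # sporadic refresh
            self.weights *= np.exp(-self.eta * self.buffer)
            self.buffer[:] = 0.0

# Usage: the played distribution changes only once per block of rounds.
rng = np.random.default_rng(0)
learner = LazyMW(n_actions=5)
for t in range(30):
    p = learner.play()
    learner.observe(rng.random(5))
```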
@KonstDaskalakis
Constantinos Daskalakis
2 years
The number of rounds in our upper bound depends exponentially on the inverse of the approximation parameter. We present a lower bound showing that this dependence is necessary, even when the adversary is oblivious.
1 reply · 0 reposts · 1 like
@KonstDaskalakis
Constantinos Daskalakis
2 years
They also provide algorithms for computing low-rank and sparse correlated equilibria.
1 reply · 0 reposts · 1 like
@KonstDaskalakis
Constantinos Daskalakis
2 years
As a consequence of our main results and the standard connection between no-swap regret learning and correlated equilibria, we obtain a polynomial-time algorithm for finding an approximate correlated equilibrium in extensive-form games, for constant approximation parameter.
1 reply · 0 reposts · 1 like
@KonstDaskalakis
Constantinos Daskalakis
2 years
More generally, we obtain that any class of finite Littlestone dimension (or sequential fat-shattering dimension) has a no-swap-regret learner. We also derive an extension to the bandit setting, for which we show a nearly optimal swap regret bound for large action spaces.
1 reply · 0 reposts · 1 like
@KonstDaskalakis
Constantinos Daskalakis
2 years
We show that no-swap regret learning is possible for any function class which has a no-external regret learner. As an example, in the setting of learning with expert advice with N experts, only polylog(N) rounds are needed to obtain swap regret bounded by an arbitrary constant.
1 reply · 0 reposts · 1 like
@KonstDaskalakis
Constantinos Daskalakis
2 years
Specifically, we ask: can one improve upon the classical bounds of Blum-Mansour and Stolz-Lugosi, which suffer linear dependence on the number of actions, and achieve sublinear swap regret/correlated equilibrium computation with only poly-logarithmic dependence on the number of actions instead?
1 reply · 0 reposts · 1 like
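For contrast, here is a sketch of the classical Blum-Mansour reduction mentioned above, the baseline whose guarantees scale linearly with the number of actions: it runs one external-regret learner per action, plays the stationary distribution of the row-stochastic matrix formed by the learners' distributions, and splits each loss among the learners proportionally to the mass they were assigned. (The multiplicative-weights instantiation and parameter values are illustrative.)

```python
# Sketch of the classical Blum-Mansour swap-regret reduction (the baseline
# with linear dependence on the number of actions, not the new algorithm).
import numpy as np

def stationary(Q, iters=200):
    # Power iteration for p = p @ Q; Q is strictly positive here, so the
    # stationary distribution exists and is unique.
    p = np.full(Q.shape[0], 1.0 / Q.shape[0])
    for _ in range(iters):
        p = p @ Q
        p /= p.sum()
    return p

def blum_mansour_average_loss(losses, eta=0.1):
    """losses: (T, n) array of per-round loss vectors in [0, 1]."""
    T, n = losses.shape
    W = np.ones((n, n))  # row i = weights of external learner i (one per action)
    total = 0.0
    for t in range(T):
        Q = W / W.sum(axis=1, keepdims=True)  # row i = learner i's distribution
        p = stationary(Q)                     # distribution actually played
        total += p @ losses[t]
        # Learner i is charged the loss scaled by the probability p[i] with
        # which its recommendation was followed (multiplicative-weights update).
        W *= np.exp(-eta * np.outer(p, losses[t]))
    return total / T

rng = np.random.default_rng(0)
print(blum_mansour_average_loss(rng.random((500, 4))))
```

The swap regret of the combined strategy is bounded by the sum of the n learners' external regrets, which is exactly where the linear dependence on the number of actions comes from.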
@KonstDaskalakis
Constantinos Daskalakis
2 years
Exciting work w/ @YuvalDagan3 @MFishelson @GolowichNoah on efficient algorithms for no-swap-regret learning and, relatedly, correlated equilibria when the number of actions is exponentially large or infinite. While classical works point in the opposite direction, we show that this is actually possible!
1 reply · 9 reposts · 58 likes
@KonstDaskalakis
Constantinos Daskalakis
2 years
RT @AlexGDimakis: Athens has a lot of talented AI researchers and developers.
0 replies · 5 reposts · 0 likes