Mathieu Dagréou Profile
Mathieu Dagréou

@Mat_Dag

Followers: 493
Following: 1K
Media: 10
Statuses: 382

Ph.D. student at @Inria_Saclay working on Optimization and Machine Learning. @matdag.bsky.social

Paris, France
Joined July 2019
@Mat_Dag
Mathieu Dagréou
2 years
📣📣 Preprint alert 📣📣 « A Lower Bound and a Near-Optimal Algorithm for Bilevel Empirical Risk Minimization », with @tomamoral, @vaiter & @PierreAblin. 1/3
arxiv.org
Bilevel optimization problems, which are problems where two optimization problems are nested, have more and more applications in machine learning. In many practical cases, the upper and the lower...
2
15
43
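For context, bilevel empirical risk minimization problems of the kind mentioned in this preprint are usually written in the following nested form (a generic sketch of the setting, not necessarily the exact assumptions of the paper):

\[
\min_{x \in \mathbb{R}^d} \; h(x) = f\bigl(x, y^\star(x)\bigr)
\quad \text{subject to} \quad
y^\star(x) \in \operatorname*{arg\,min}_{y \in \mathbb{R}^p} g(x, y),
\]

where, in the empirical-risk setting, the outer and inner objectives are finite sums over data samples, e.g. \( g(x, y) = \frac{1}{n} \sum_{i=1}^{n} g_i(x, y) \) and \( f(x, y) = \frac{1}{m} \sum_{j=1}^{m} f_j(x, y) \).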
@Mat_Dag
Mathieu Dagréou
8 days
RT @rdMorel: For evolving unknown PDEs, ML models are trained on next-state prediction. But do they actually learn the time dynamics: the "….
0
49
0
@Mat_Dag
Mathieu Dagréou
21 days
RT @mblondel_ml: Back from MLSS Senegal 🇸🇳, where I had the honor of giving lectures on differentiable programming. Really grateful for all….
github.com
Slides for the book "The Elements of Differentiable Programming". - diffprog/slides
0
20
0
@Mat_Dag
Mathieu Dagréou
1 month
RT @wazizian: ❓ How long does SGD take to reach the global minimum on non-convex functions? With @FranckIutzeler, J. Malick, P. Mertikopou….
0
72
0
@Mat_Dag
Mathieu Dagréou
1 month
RT @konstmish: I want to address one very common misconception about optimization. I often hear that (approximately) preconditioning with t….
0
16
0
@Mat_Dag
Mathieu Dagréou
1 month
RT @MatthieuTerris: 🧵 I'll be at CVPR next week presenting our FiRe work 🔥. TL;DR: We go beyond denoising models in PnP with more general r….
0
5
0
@Mat_Dag
Mathieu Dagréou
2 months
RT @vaiter: 📣 New preprint 📣 **Differentiable Generalized Sliced Wasserstein Plans**, w/ L. Chapel, @rtavenar. We propose a Generalized….
0
5
0
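As a reference point for this announcement, the standard (non-generalized) sliced Wasserstein distance averages one-dimensional Wasserstein distances over projection directions; the generalized, plan-based construction of the preprint presumably extends this idea, so the following is only the textbook definition, not the paper's:

\[
\mathrm{SW}_p^p(\mu, \nu) \;=\; \int_{\mathbb{S}^{d-1}} W_p^p\bigl((P_\theta)_\# \mu, \, (P_\theta)_\# \nu\bigr) \, \mathrm{d}\sigma(\theta),
\qquad P_\theta(x) = \langle \theta, x \rangle,
\]

where \( \sigma \) is the uniform measure on the unit sphere and \( (P_\theta)_\# \) denotes the push-forward of a measure by the projection \( P_\theta \).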
@Mat_Dag
Mathieu Dagréou
3 months
RT @mathusmassias: It was received quite enthusiastically here, so time to share it again!!! Our #ICLR2025 blog post on Flow Matching wa….
0
5
0
@Mat_Dag
Mathieu Dagréou
5 months
RT @gabrielpeyre: Optimization algorithms come with many flavors depending on the structure of the problem. Smooth vs non-smooth, convex vs….
0
104
0
@Mat_Dag
Mathieu Dagréou
5 months
RT @haeggee: A really fun project to work on. Looking at these plots side-by-side still amazes me! How well can **convex optimization theor….
0
5
0
@Mat_Dag
Mathieu Dagréou
6 months
RT @FSchaipp: Learning rate schedules seem mysterious? Turns out that their behaviour can be described with a bound from *convex, nonsmooth….
arxiv.org
We show that learning-rate schedules for large model training behave surprisingly similar to a performance bound from non-smooth convex optimization theory. We provide a bound for the constant...
0
26
0
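For reference, the classical performance bound for subgradient-type methods on convex, non-smooth problems with an arbitrary step-size schedule \( (\eta_t) \) has the following shape (this is the textbook bound, not necessarily the refined expression derived in the paper above):

\[
\min_{t \le T} f(x_t) - f^\star \;\le\; \frac{R^2 + G^2 \sum_{t=1}^{T} \eta_t^2}{2 \sum_{t=1}^{T} \eta_t},
\]

where \( R \) bounds the distance from the initialization to a minimizer and \( G \) bounds the subgradient norms; plugging in a concrete schedule (constant, cosine, warmup-stable-decay, …) gives the curve that the paper compares to observed training behaviour.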
@Mat_Dag
Mathieu Dagréou
6 months
RT @konstmish: Learning rate schedulers used to be a big mystery. Now you can just take a guarantee for *convex non-smooth* problems (from….
0
76
0
@Mat_Dag
Mathieu Dagréou
6 months
RT @theo_uscidda: Our work on geometric disentangled representation learning has been accepted to ICLR 2025! 🎊 See you in Singapore if you w….
0
18
0
@Mat_Dag
Mathieu Dagréou
6 months
RT @gabrielpeyre: The Mathematics of Artificial Intelligence: In this introductory and highly subjective survey, aimed at a general mathema….
0
438
0
@Mat_Dag
Mathieu Dagréou
7 months
RT @BachFrancis: My book is (at last) out, just in time for Christmas! A blog post to celebrate and present it: htt….
0
318
0
@Mat_Dag
Mathieu Dagréou
7 months
RT @vaiter: When optimization problems have multiple minima, algorithms favor specific solutions due to their implicit bias. For ordinary l….
0
51
0
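The truncated example in this retweet is presumably ordinary least squares; the classical illustration of implicit bias in that setting (a standard textbook fact, not a claim about the thread's exact content) is that gradient descent started at the origin selects the minimum-norm solution:

\[
\min_{w \in \mathbb{R}^d} \tfrac{1}{2} \| Xw - y \|_2^2,
\]

which has multiple minimizers when the system \( Xw = y \) is underdetermined and consistent; gradient descent initialized at \( w_0 = 0 \) keeps its iterates in the row space of \( X \) and converges to \( w^\star = X^\dagger y \), the interpolating solution of minimum Euclidean norm.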
@Mat_Dag
Mathieu Dagréou
7 months
RT @PierreAblin: 🍏🍏🍏 Come work with us at Apple Machine Learning Research! 🍏🍏🍏 Our team focuses on curiosity-based, open research. We wor….
0
36
0
@Mat_Dag
Mathieu Dagréou
7 months
RT @inria_paris: 🏆 #Distinction | Congratulations to @gerardbiau (Centre @Inria @Sorbonne_Univ_), director of #SCAI and specialist….
0
8
0
@Mat_Dag
Mathieu Dagréou
7 months
RT @vaiter: There exists a strictly increasing, continuous function f:[0,1]→[0,1] whose derivative is 0 almost everywhere. https://t.….
0
42
0
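A classical witness of this statement (one standard example; the truncated link may point to a different construction) is Minkowski's question-mark function, defined on [0,1] through continued-fraction expansions:

\[
?\bigl([0; a_1, a_2, a_3, \dots]\bigr) \;=\; \sum_{k \ge 1} \frac{(-1)^{k+1}}{2^{\,a_1 + \dots + a_k - 1}},
\]

which is continuous and strictly increasing on [0,1], yet singular: its derivative exists and equals 0 almost everywhere.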
@Mat_Dag
Mathieu Dagréou
7 months
RT @theo_uscidda: Curious about the potential of optimal transport (OT) in representation learning? Join @CuturiMarco's talk at the UniReps….
0
28
0