
Dimitri Meunier
@DimitriMeunier1
Followers: 461 · Following: 819 · Media: 3 · Statuses: 159
RT @FannyYangETH: Last call to register for the Math for Trustworthy ML workshop in Switzerland with the possibi….
0 · 11 · 0
RT @RichardSSutton: The tradition of tea time talks started long, long ago, and came to Alberta from the Gatsby unit (Neuroscience) at Univ….
0 · 8 · 0
RT @gaussianmeasure: I’ll be speaking at ENUMATH conference (Sep 1st - Sep 5th) in Heidelberg at the “Approximation Theory meets Statistica….
0 · 4 · 0
RT @jkbhagatio: This has been a long time coming! Really happy to announce Aeon, the culmination of my main Ph.D. work! A true everything….
0 · 4 · 0
RT @StatMEPapers: Dimitri Meunier, Antoine Moulin, Jakub Wornbard, Vladimir R. Kostic, Arthur Gretton. Demystify….
arxiv.org
We address the problem of causal effect estimation in the presence of hidden confounders, using nonparametric instrumental variable (IV) regression. A leading strategy employs spectral features -...
0 · 3 · 0
Very much looking forward to this! 🙌 Stellar line-up.
Announcing: The 2nd International Summer School on Mathematical Aspects of Data Science. EPFL, Sept 1–5, 2025. Speakers: Bach (@BachFrancis), Bandeira, Mallat, Montanari (@Andrea__M), Peyré (@gabrielpeyre). For PhD students & early-career researchers. Application deadline: May 15.
3 · 1 · 4
RT @Hudson19990518: New paper on Stationary MMD points 📣 1️⃣ Samples generated by MMD flow exhibit 'super-converg….
0 · 10 · 0
RT @pie_novelli: New preprint out on arXiv: "Self-Supervised Evolution Operator Learning for High-Dimensional Dynamical Systems"! Read it….
0 · 3 · 0
RT @neu_rips: wanna know how to do inverse Q-learning right? read this paper then!! Joint work with the best team of students ever ♥️.
0 · 1 · 0
RT @antoine_mln: new preprint with the amazing @LucaViano4 and @neu_rips on offline imitation learning! when the expert is hard to represe….
0 · 6 · 0
TL;DR:
✅ Theoretical guarantees for nonlinear meta-learning
✅ Explains when and how aggregation helps
✅ Connects RKHS regression, subspace estimation & meta-learning
Co-led with @lzy_michael 🙌, with invaluable support from @ArthurGretton and Samory Kpotufe.
0 · 2 · 3
🚨 New paper accepted at SIMODS! 🚨 “Nonlinear Meta-learning Can Guarantee Faster Rates”. When does meta-learning work? Spoiler: generalise to new tasks by overfitting on your training tasks! Here is why: 🧵👇
arxiv.org
Many recent theoretical works on meta-learning aim to achieve guarantees in leveraging similar representational structures from related tasks towards simplifying a target task. The main aim...
8 · 15 · 67
RT @moskitos_bite: Check out our new result on regression with heavy-tailed noise! Thanks to @gaussianmeasure, @DimitriMeunier1, @A….
arxiv.org
This paper examines the performance of ridge regression in reproducing kernel Hilbert spaces in the presence of noise that exhibits a finite number of higher moments. We establish excess risk...
0 · 5 · 0
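The abstract above concerns kernel ridge regression in an RKHS under heavy-tailed noise. As a point of reference, here is a minimal NumPy sketch of the standard estimator that setting studies; the Gaussian kernel, bandwidth, regularisation value, and Student-t noise parameters below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth**2))

def krr_fit(X, y, lam=0.1, bandwidth=1.0):
    # Closed-form kernel ridge solution: alpha = (K + n*lam*I)^{-1} y
    n = len(X)
    K = gaussian_kernel(X, X, bandwidth)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def krr_predict(X_train, alpha, X_test, bandwidth=1.0):
    return gaussian_kernel(X_test, X_train, bandwidth) @ alpha

# Toy regression with heavy-tailed (Student-t) noise, mimicking the
# finite-higher-moments regime the paper analyses (parameters are made up)
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_t(df=2.5, size=200)
alpha = krr_fit(X, y)
y_hat = krr_predict(X, alpha, X)
```

This is just the textbook ridge estimator; the paper's contribution is the excess-risk analysis under such noise, not the algorithm itself.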
RT @Chau9991: 🧠 How do we compare uncertainties that are themselves imprecisely specified? 💡 Meet IIPM (Integral IMPRECISE probability metr….
0 · 11 · 0