Dimitri Meunier (@DimitriMeunier1)

Followers: 461 · Following: 819 · Media: 3 · Statuses: 159

PhD @GatsbyUCL

Joined October 2019
Dimitri Meunier (@DimitriMeunier1) · 1 day
RT @FannyYangETH: Last call to register for the Math for Trustworthy ML workshop in Switzerland with the possibi…

Dimitri Meunier (@DimitriMeunier1) · 2 days
RT @RichardSSutton: The tradition of tea time talks started long, long ago, and came to Alberta from the Gatsby unit (Neuroscience) at Univ…

Dimitri Meunier (@DimitriMeunier1) · 15 days
RT @gaussianmeasure: I’ll be speaking at the ENUMATH conference (Sep 1st - Sep 5th) in Heidelberg at the “Approximation Theory meets Statistica…

Dimitri Meunier (@DimitriMeunier1) · 24 days
RT @jkbhagatio: This has been a long time coming! Really happy to announce Aeon, the culmination of my main Ph.D. work! A true everything…

Dimitri Meunier (@DimitriMeunier1) · 3 months
Very much looking forward to this! 🙌 Stellar line-up.

Quoting Lénaïc Chizat (@LenaicChizat) · 5 months
Announcing: The 2nd International Summer School on Mathematical Aspects of Data Science. EPFL, Sept 1–5, 2025. Speakers: Bach (@BachFrancis), Bandeira, Mallat, Montanari (@Andrea__M), Peyré (@gabrielpeyre). For PhD students & early-career researchers. Application deadline: May 15.

Dimitri Meunier (@DimitriMeunier1) · 3 months
RT @Hudson19990518: New paper on Stationary MMD points 📣 1️⃣ Samples generated by MMD flow exhibit 'super-converg…

Dimitri Meunier (@DimitriMeunier1) · 3 months
RT @pie_novelli: New preprint out on arXiv: "Self-Supervised Evolution Operator Learning for High-Dimensional Dynamical Systems"! Read it…

Dimitri Meunier (@DimitriMeunier1) · 3 months
RT @neu_rips: wanna know how to do inverse Q-learning right? read this paper then!! Joint work with the best team of students ever ♥️

Dimitri Meunier (@DimitriMeunier1) · 3 months
RT @antoine_mln: new preprint with the amazing @LucaViano4 and @neu_rips on offline imitation learning! When the expert is hard to represe…

Dimitri Meunier (@DimitriMeunier1) · 3 months
TL;DR:
✅ Theoretical guarantees for nonlinear meta-learning
✅ Explains when and how aggregation helps
✅ Connects RKHS regression, subspace estimation & meta-learning
Co-led with @lzy_michael 🙌, with invaluable support from @ArthurGretton and Samory Kpotufe.

Dimitri Meunier (@DimitriMeunier1) · 3 months
Even with a nonlinear representation, you can estimate the shared structure at a rate that improves in both N (tasks) and n (samples per task). This leads to parametric rates on the target task! ⚡ Bonus: for linear kernels, our results recover the known linear meta-learning rates.

Dimitri Meunier (@DimitriMeunier1) · 3 months
Short answer: yes ✅. Key idea 💡: instead of learning each task well, under-regularise the per-task estimators to better estimate the shared subspace in the RKHS. Even though each per-task estimate is noisy, their span reveals the structure we care about. Bias-variance tradeoff in action.

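A minimal sketch of the idea in this tweet, assuming a toy setup: fit kernel ridge regression (KRR) per task with a deliberately tiny penalty, then read the shared subspace off an SVD of the stacked per-task estimates. This is illustrative only, not the paper's estimator; the kernel, the 2-dimensional toy subspace, and all constants below are my assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_kernel(A, B, gamma=2.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def h(X):
    # Shared structure (toy): every task's regression function lies in
    # the span of these two fixed functions, a 2-dim subspace of the RKHS.
    return np.stack([np.sin(3 * X[:, 0]), np.cos(2 * X[:, 0])], axis=1)

N_tasks, n, lam = 50, 40, 1e-4   # lam far below the single-task optimum
Xs, alphas = [], []
for _ in range(N_tasks):
    X = rng.uniform(-1, 1, size=(n, 1))
    w = rng.normal(size=2)                    # task-specific coefficients
    y = h(X) @ w + 0.1 * rng.normal(size=n)   # noisy observations
    K = gauss_kernel(X, X)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)  # KRR dual coefficients
    Xs.append(X)
    alphas.append(alpha)

# Evaluate each per-task estimate f_t(x) = sum_i alpha_i k(x, x_i) on a
# common grid; the leading singular vectors of the stacked evaluations
# estimate the shared span.
grid = np.linspace(-1, 1, 200)[:, None]
F = np.stack([gauss_kernel(grid, X) @ a for X, a in zip(Xs, alphas)], axis=1)
U, S, _ = np.linalg.svd(F, full_matrices=False)
print("top singular values:", np.round(S[:5], 2))
# A sharp drop after the 2nd singular value would indicate that the spans
# of the noisy, under-regularised estimates concentrate on the true
# 2-dimensional shared subspace, which is the tweet's point.
```

Shrinking lam below its per-task optimum hurts each individual fit, but per the tweet's bias-variance argument the aggregated span across the N tasks still pins down the shared subspace.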
Dimitri Meunier (@DimitriMeunier1) · 3 months
Our paper analyses a meta-learning setting where tasks share a finite-dimensional subspace of a Reproducing Kernel Hilbert Space. Can we still estimate this shared representation efficiently, and learn new tasks fast?

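In symbols, the setting this tweet describes reads roughly as follows; the notation is my own hedged reconstruction, not necessarily the paper's:

```latex
% N tasks share an s-dimensional subspace of an RKHS \mathcal{H}:
% each regression function is a task-specific combination of a
% shared basis h_1, \dots, h_s \in \mathcal{H}.
f_t \in \operatorname{span}\{h_1, \dots, h_s\} \subset \mathcal{H},
\qquad
y_{t,i} = f_t(x_{t,i}) + \varepsilon_{t,i},
\quad t = 1, \dots, N, \; i = 1, \dots, n.
```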
Dimitri Meunier (@DimitriMeunier1) · 3 months
Most prior theory assumes linear structure: all tasks share a linear representation, and the task-specific parts are also linear. Then one can show improved learning rates as the number of tasks increases. But reality is nonlinear. What then?

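For contrast, the linear setting this tweet refers to is commonly formalised along these lines (again a hedged sketch of the standard setup, not a quote from any specific prior work):

```latex
% Linear meta-learning: a shared linear representation B with a
% linear task-specific head w_t on top of it.
f_t(x) = w_t^{\top} B^{\top} x,
\qquad
B \in \mathbb{R}^{d \times s}, \; w_t \in \mathbb{R}^{s},
\quad t = 1, \dots, N.
```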
Dimitri Meunier (@DimitriMeunier1) · 3 months
Meta-learning = using many related tasks to help learn new ones faster. In practice (e.g. with neural nets), this usually means learning a shared representation across tasks, so we can train quickly on unseen ones. But what’s the theory behind this? 🤔

Dimitri Meunier (@DimitriMeunier1) · 3 months
🚨 New paper accepted at SIMODS! 🚨 “Nonlinear Meta-learning Can Guarantee Faster Rates”. When does meta-learning work? Spoiler: generalise to new tasks by overfitting on your training tasks! Here is why: 🧵👇

arxiv.org: Many recent theoretical works on meta-learning aim to achieve guarantees in leveraging similar representational structures from related tasks towards simplifying a target task. The main aim...

Dimitri Meunier (@DimitriMeunier1) · 3 months
RT @ArthurGretton: Square loss, heavy-tailed noise: not as bad as you think!

Dimitri Meunier (@DimitriMeunier1) · 3 months
RT @Chau9991: 🧠 How do we compare uncertainties that are themselves imprecisely specified? 💡 Meet IIPM (Integral IMPRECISE probability metr…