Clément Bonet Profile
Clément Bonet

@Clement_Bonet_

Followers 231 · Following 450 · Media 10 · Statuses 40

Postdoctoral researcher at ENSAE interested in Optimal Transport.

Paris, France
Joined January 2021
@Clement_Bonet_
Clément Bonet
27 days
RT @SibylleMarcotte: In Vancouver for #ICML2025! I'll present our work during oral session 1D at 10:45 tomorrow (Ballroom C), followed by p…
0
3
0
@Clement_Bonet_
Clément Bonet
27 days
RT @Clement_Bonet_: With Christophe, we will present our work Tuesday. 📍Oral: West Ballroom D, Poster: East Exhibition Hall A-B #E-1300. 📅…
0
1
0
@Clement_Bonet_
Clément Bonet
27 days
RT @Clement_Bonet_: 🎉 Happy to share that our work "Flowing Datasets with Wasserstein over Wasserstein Gradient Flows" was accepted at #ICM…
0
23
0
@Clement_Bonet_
Clément Bonet
1 month
RT @BIRS_Math: Clément Bonet (ENSAE/CREST), Flowing Datasets with Wasserstein over Wasserstein Gradient Flows
0
1
0
@Clement_Bonet_
Clément Bonet
1 month
With Christophe, we will present our work on Tuesday. 📍Oral: West Ballroom D, Poster: East Exhibition Hall A-B #E-1300. 📅 Tuesday, July 15th, 4 p.m. for the Oral, and between 4:30 p.m. and 7 p.m. for the Poster. See you there!
0
1
4
@Clement_Bonet_
Clément Bonet
1 month
We apply this scheme to minimize the MMD with kernels based on the Sliced-Wasserstein distance. As applications, we flow datasets of images to solve tasks such as transfer learning and dataset distillation. (See the sketch below.)
1
0
2
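A minimal sketch of the kernel this tweet names, assuming POT's `ot.sliced_wasserstein_distance` is available; the Gaussian form of the kernel, the bandwidth `sigma`, and the toy data are illustrative choices, not the paper's code.

```python
# Hedged sketch: MMD between two "datasets of distributions", with a Gaussian
# kernel built on the Sliced-Wasserstein distance between point clouds.
import numpy as np
import ot  # POT: pip install pot

def sw_gaussian_kernel(mu, nu, sigma=1.0, n_projections=64, seed=0):
    """k(mu, nu) = exp(-SW_2(mu, nu)^2 / (2 sigma^2)) between two point clouds."""
    sw = ot.sliced_wasserstein_distance(mu, nu, n_projections=n_projections, seed=seed)
    return np.exp(-sw**2 / (2 * sigma**2))

def mmd2(ds_a, ds_b, **kw):
    """Squared MMD between two datasets, each a list of per-class point clouds."""
    kaa = np.mean([[sw_gaussian_kernel(x, y, **kw) for y in ds_a] for x in ds_a])
    kbb = np.mean([[sw_gaussian_kernel(x, y, **kw) for y in ds_b] for x in ds_b])
    kab = np.mean([[sw_gaussian_kernel(x, y, **kw) for y in ds_b] for x in ds_a])
    return kaa + kbb - 2 * kab

rng = np.random.default_rng(0)
source = [rng.normal(c, 1.0, size=(50, 2)) for c in range(3)]        # 3 "classes"
target = [rng.normal(c + 0.5, 1.0, size=(50, 2)) for c in range(3)]
print(mmd2(source, target, sigma=1.0))
```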
@Clement_Bonet_
Clément Bonet
1 month
We leverage this gradient to perform optimization over this space: each particle is updated along it, and we observe several layers of interactions, both between the particles and between the classes. (See the sketch below.)
1
0
0
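A hedged sketch of such a particle update, substituting autograd on a hand-rolled Sliced-Wasserstein MMD for the paper's exact WoW gradient; the step size, bandwidth, and toy data are assumptions for illustration.

```python
# Every sample of every class is a particle; autograd differentiates the
# SW-kernel MMD w.r.t. all particles at once, and each SGD step is one
# explicit Euler step of the discretized flow.
import torch

torch.manual_seed(0)

def sw2(x, y, n_proj=64):
    """Monte-Carlo estimate of SW_2^2 between two equal-size point clouds."""
    theta = torch.randn(n_proj, x.shape[1])
    theta = theta / theta.norm(dim=1, keepdim=True)   # random directions on the sphere
    px = (x @ theta.T).sort(dim=0).values             # sorted 1D projections
    py = (y @ theta.T).sort(dim=0).values
    return ((px - py) ** 2).mean()

def kern(x, y, sigma=1.0):
    return torch.exp(-sw2(x, y) / (2 * sigma**2))     # Gaussian kernel on SW

def mmd2(src, tgt):
    kaa = torch.stack([kern(a, b) for a in src for b in src]).mean()
    kbb = torch.stack([kern(a, b) for a in tgt for b in tgt]).mean()
    kab = torch.stack([kern(a, b) for a in src for b in tgt]).mean()
    return kaa + kbb - 2 * kab

# 3 classes of 40 particles each; the source particles are the flowing variables
src = [torch.randn(40, 2).requires_grad_() for _ in range(3)]
tgt = [torch.randn(40, 2) + 3.0 * c for c in range(3)]

opt = torch.optim.SGD(src, lr=5.0)                    # plain gradient step
for it in range(100):
    opt.zero_grad()
    mmd2(src, tgt).backward()
    opt.step()
```

The gradient of each particle couples it to its classmates through the sorted projections, and to the other classes through the kernel sums: one way to see the two layers of interaction mentioned above.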
@Clement_Bonet_
Clément Bonet
1 month
To solve this task, we endow the space with the Wasserstein over Wasserstein (WoW) distance and exploit its Riemannian structure, which gives us a way to define a notion of gradient. (Definitions sketched below.)
1
0
0
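For readers following along, a hedged restatement of the objects this tweet relies on, in standard optimal-transport notation (not necessarily the paper's exact conventions):

```latex
% P and Q are probability measures over the Wasserstein space P_2(R^d);
% WoW lifts W_2 one level up by using W_2^2 itself as ground cost:
\[
  \mathrm{WoW}^2(\mathbb{P}, \mathbb{Q})
    = \inf_{\gamma \in \Pi(\mathbb{P}, \mathbb{Q})}
      \int \mathrm{W}_2^2(\mu, \nu) \, \mathrm{d}\gamma(\mu, \nu).
\]
% Mimicking Otto's construction on (P_2(R^d), W_2), this metric carries a
% Riemannian-like structure, hence tangent spaces and a gradient for
% functionals F(P), which is the object the flow descends.
```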
@Clement_Bonet_
Clément Bonet
1 month
In our work, we propose to model labeled datasets as probability distributions over probability distributions, and to frame the task of flowing datasets as the minimization of a discrepancy over this space. (See the sketch below.)
2
0
1
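A minimal sketch of that modeling step, with hypothetical names and toy data: the labeled dataset becomes the uniform measure (1/C) sum_c delta_{mu_c} over the per-class empirical measures mu_c.

```python
# Two-level representation of a labeled dataset as a measure over measures.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(150, 2))      # 150 samples in R^2
labels = rng.integers(0, 3, size=150)     # 3 classes

# inner level: one point cloud (empirical measure mu_c) per class
inner_measures = [features[labels == c] for c in np.unique(labels)]
# outer level: uniform mass 1/C on each Dirac delta_{mu_c}
outer_weights = np.full(len(inner_measures), 1.0 / len(inner_measures))
```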
@Clement_Bonet_
Clément Bonet
1 month
🎉 Happy to share that our work "Flowing Datasets with Wasserstein over Wasserstein Gradient Flows" was accepted at #ICML2025 as an oral! This is joint work with the amazing Christophe Vauthier and @Korba_Anna! Link:
1
23
100
@Clement_Bonet_
Clément Bonet
1 month
RT @lomarchehab: Check out this great work led by @HanlinYu2025 and follow him! We learn the time score faster and more accurately (in pix…
0
2
0
@Clement_Bonet_
Clément Bonet
3 months
RT @KempnerInst: If you're at #AISTATS2025, check out the presentation by Jonathan Geuter, in collaboration with @Clement_Bonet_, @Korba_A…
0
6
0
@Clement_Bonet_
Clément Bonet
5 months
RT @JmlrOrg: 'Sliced-Wasserstein Distances and Flows on Cartan-Hadamard Manifolds', by Clément Bonet, Lucas Drumetz, Nicolas Courty. https…
0
3
0
@Clement_Bonet_
Clément Bonet
6 months
RT @TmlrPub: Slicing Unbalanced Optimal Transport. Clément Bonet, Kimia Nadjahi, Thibault Sejourne, Kilian FATRAS, Nicolas Courty. Action…
openreview.net — Optimal transport (OT) is a powerful framework to compare probability measures, a fundamental task in many statistical and machine learning problems. Substantial advances have been made in…
0
2
0
@Clement_Bonet_
Clément Bonet
8 months
RT @RFlamary: Today something crazy happened. POT has reached 1000 citations (total) 🤩🚀. Very proud to be part of a scientific community th…
0
20
0
@Clement_Bonet_
Clément Bonet
8 months
RT @JamesTThorn: From Diffusion Models to Schrödinger Bridges. A shame not to see this NeurIPS keynote live, by the incredible @ArnaudDo…
0
23
0
@Clement_Bonet_
Clément Bonet
8 months
RT @theo_uscidda: Excited about Wasserstein gradient flows? We extend mirror and preconditioned gradient descent on the space of probabilit…
0
9
0
@Clement_Bonet_
Clément Bonet
8 months
I will be presenting the paper on Thursday at NeurIPS@Paris and next week at #NeurIPS2024.
0
1
2
@Clement_Bonet_
Clément Bonet
8 months
We use Mirror Descent to minimize the Kullback-Leibler divergence with a Gaussian target. With a well-chosen potential energy, or the entropy as Bregman potential, the algorithm converges faster than the forward-backward algorithm. (Toy sketch below.)
1
0
2
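A toy sketch of this idea under a strong simplification (densities discretized on a grid, so this is not the paper's Wasserstein-space algorithm): with the negative entropy as Bregman potential, mirror descent on KL(p || q) reduces to the classical exponentiated-gradient update.

```python
import numpy as np

grid = np.linspace(-5.0, 5.0, 200)
q = np.exp(-grid**2 / 2); q /= q.sum()    # discretized Gaussian target
p = np.ones_like(q) / q.size              # uniform initialization

lr = 0.5                                  # illustrative step size
for it in range(100):
    grad = np.log(p / q) + 1.0            # gradient of KL(p || q) w.r.t. p
    p = p * np.exp(-lr * grad)            # mirror step for the entropy potential
    p /= p.sum()                          # renormalize onto the simplex

print(np.sum(p * np.log(p / q)))          # KL(p || q), close to 0 after descent
```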