
Mathurin Massias
@mathusmassias
Followers
2K
Following
2K
Media
80
Statuses
369
Researcher @INRIA_lyon, Ockham team. Teacher @Polytechnique and @ENSdeLyon: Machine Learning, Python, and Optimization
Paris, France
Joined July 2018
New paper on the generalization of Flow Matching https://t.co/BJMHUnY6xJ Why does flow matching generalize? Did you know that the flow matching target you're trying to learn **can only generate training points**? With @Qu3ntinB, Anne Gagneux & Rémi Emonet
17
257
1K
I am thrilled to announce that our work on the generalization of flow matching has been accepted to NeurIPS as an oral!! See you in San Diego!
5
67
589
One-day workshop on Diffusion models and Flow matching, October 24th at ENS Lyon. Registration and call for contributions (short talks and posters) are open at
gdr-iasis.cnrs.fr
Registration for the meeting is closed, but it is still possible to participate remotely. A Zoom link will be posted on this page a few days before the meeting. Physical registration is...
1
7
22
This is happening today, 6pm Paris time!
Don't forget to join us tomorrow, July 3rd, as we host @tonysilveti for a session on "Training neural networks at any scale" Learn more:
0
3
20
Then why does flow matching generalize?? Because it fails! The inductive bias of the neural network prevents it from perfectly learning u* and overfitting. In particular, networks fail to learn the velocity field at two particular time values. See the paper for a finer analysis
1
5
59
We propose to regress directly against the optimal (deterministic) u* and show that it never degrades performance. On the contrary, removing target stochasticity helps the model generalize faster.
1
2
25
Yet FM generates new samples! A hypothesis to explain this paradox is target stochasticity: FM targets the conditional velocity field, i.e., only a stochastic approximation of the full velocity field u*. *We refute this hypothesis*: very early, the approximation almost equals u*
1
2
36
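For concreteness, here is a minimal NumPy sketch of that exact target u*, assuming straight conditional paths x_t = (1 - t) x0 + t x1 with a standard Gaussian base (so p_t(x | x1) = N(t x1, (1 - t)^2 I)); the helper name exact_velocity and the setup are illustrative, not the paper's code.

import numpy as np

def exact_velocity(t, x, train_pts):
    """Closed-form marginal FM velocity u*(t, x) for an empirical target.

    Under straight paths, u_t(x | x1) = (x1 - x) / (1 - t); the marginal
    field u* is the posterior-weighted average of these conditional
    velocities over the training points.
    """
    s = 1.0 - t                                   # conditional path std
    d2 = ((x - t * train_pts) ** 2).sum(axis=1)   # ||x - t x_i||^2, shape (n,)
    logw = -d2 / (2 * s ** 2)                     # Gaussian log-weights
    w = np.exp(logw - logw.max())
    w /= w.sum()                                  # posterior over training points
    return (w[:, None] * (train_pts - x)).sum(axis=0) / s

Regressing against this deterministic target, as proposed above, simply swaps the stochastic conditional target for exact_velocity(t, xt, train_pts); integrating the exact field transports Gaussian samples onto the training points themselves, which is the memorization paradox the thread describes.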
Same here. My solution was to give a "very low" grade to 99% of the papers
This is quite strange: I received a batch of papers that mostly do not relate to my expertise. @NeurIPSConf
0
3
9
On Saturday Anne will also present some very, very cool work on how to leverage Flow Matching models to obtain state-of-the-art Plug-and-Play methods: PnP-Flow: Plug-and-Play Image Restoration with Flow Matching, poster #150 in poster session 6, Saturday at 3 pm https://t.co/0GRNZd3l8O
arxiv.org
In this paper, we introduce Plug-and-Play (PnP) Flow Matching, an algorithm for solving imaging inverse problems. PnP methods leverage the strength of pre-trained denoisers, often deep neural...
0
0
3
It was received quite enthusiastically here, so time to share it again!!! Our #ICLR2025 blog post on Flow Matching was published yesterday: https://t.co/2V5BLl6T2p My PhD student Anne Gagneux will present it tomorrow at ICLR, poster session 4, 3 pm, #549 in Hall 3/2B
1
5
11
MLSS is coming to Senegal in 2025! AIMS Mbour, Senegal
June 23 - July 4, 2025. An international summer school to explore, collaborate, and deepen your understanding of machine learning in a unique and welcoming environment. Details:
2
33
60
For people who could not attend the #NeurIPS2024 tutorial on flow matching, here is a friendly introduction! https://t.co/IJIO2Bzo8X
0
93
699
The optimization loss in FM is easy to evaluate, and does not require integration as in CNFs. The whole process is smooth! The illustrations are much nicer in the blog post, go read it! https://t.co/TSkg1VZ5cn
1
2
12
FM learns a vector field u pushing the base distribution to the target through an ODE. To learn it, introduce a conditioning random variable, breaking the problem into smaller ones that have closed-form solutions. Here's the magic: the small problems can be used to solve the original one!
1
1
9
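As a concrete illustration of the conditioning trick, here is a minimal PyTorch sketch of one conditional flow matching objective evaluation, assuming the straight path x_t = (1 - t) x0 + t x1, whose conditional velocity is simply x1 - x0; cfm_loss and the model(t, x) signature are hypothetical, not the blog post's code.

import torch

def cfm_loss(model, x1):
    """Conditional FM loss on a data batch x1 of shape (batch, dim).

    The regression target x1 - x0 is the conditional velocity of the
    straight path x_t = (1 - t) x0 + t x1: it is known in closed form,
    so training needs no ODE solve and no likelihood evaluation.
    """
    x0 = torch.randn_like(x1)        # samples from the Gaussian base
    t = torch.rand(x1.shape[0], 1)   # one uniform time per sample
    xt = (1 - t) * x0 + t * x1       # point on the conditional path
    return ((model(t, xt) - (x1 - x0)) ** 2).mean()

In expectation this conditional loss has the same gradient as regressing on the intractable marginal field, which is exactly why the small problems solve the original one.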
FM is a technique to train continuous normalizing flows (CNFs) that progressively transform a simple base distribution into the target one. Two benefits: no need for likelihoods nor ODE solves in training, and it makes the problem better posed by defining a *unique sequence of densities* from base to target
1
1
14
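For context, the likelihood-based CNF training that FM sidesteps relies on the instantaneous change of variables, a standard identity stated here for reference, which requires integrating a divergence along the whole ODE trajectory:

\frac{\mathrm{d}x_t}{\mathrm{d}t} = u_t(x_t), \qquad \log p_1(x_1) = \log p_0(x_0) - \int_0^1 (\nabla \cdot u_t)(x_t)\, \mathrm{d}t

FM avoids both the ODE solve and the divergence term during training, hence the two benefits above.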
Anne Gagneux, Ségolène Martin, @qu3ntinb, Rémi Emonet and I wrote a tutorial blog post on flow matching: https://t.co/TSkg1VZ5cn with lots of illustrations and intuition! We got this idea after their cool work on improving Plug and Play with FM: https://t.co/0GRNZd3l8O
5
98
585
Looks like the last reason to stay here won't remain valid for long!
0
0
8
btw this is also my first post on Bluesky, with the handle mathurinmassias
0
0
2
New blog post: the Hutchinson trace estimator, or how to evaluate divergence/Jacobian trace cheaply. Fundamental for Continuous Normalizing Flows https://t.co/W94Q08QIRx
2
18
122
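Here is a minimal NumPy sketch of the estimator with Rademacher probes; hutchinson_trace and its signature are illustrative, not the blog post's code. In a CNF, matvec would be a vector-Jacobian product of the velocity field, so the divergence is estimated without ever forming the Jacobian.

import numpy as np

def hutchinson_trace(matvec, dim, n_samples=100, seed=None):
    """Estimate tr(A) using only matrix-vector products v -> A v.

    Unbiased because E[v^T A v] = tr(A) whenever E[v v^T] = I,
    e.g. for Rademacher vectors with +/-1 entries.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=dim)  # Rademacher probe
        total += v @ matvec(v)
    return total / n_samples

# sanity check on an explicit matrix: tr(A) = 5
A = np.array([[2.0, 1.0], [0.0, 3.0]])
print(hutchinson_trace(lambda v: A @ v, dim=2, n_samples=2000))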