Bálint Máté
@balintmt
204 Followers · 392 Following · 15 Media · 42 Statuses
PhD student at @unige_en
Joined February 2021
New preprint on solvation free energies! tl;dr: we define an interpolating density by its sampling process, and learn the corresponding equilibrium potential with score matching. https://t.co/sO0EI4sy5r with @francoisfleuret and @tristanbereau (1/n)
Now accepted at #NeurIPS2025! We speed up diffusion models for sampling molecular conformations by 30x :) More exciting stuff coming soon!
Really excited to (finally) share the updated JAMUN preprint and codebase! We perform Langevin molecular dynamics in a smoothed space, which allows us to take larger integrator steps. This requires learning a score function at only a single noise level, unlike diffusion models.
🚀 After two+ years of intense research, we’re thrilled to introduce Skala — a scalable deep learning density functional that hits chemical accuracy on atomization energies and matches hybrid-level accuracy on main group chemistry — all at the cost of semi-local DFT. ⚛️🔥🧪🧬
Finally, we also look at what happens if we predict the hydration free energy of methane using the potential that was trained on water (and vice versa). (10/10)
The approach is tested on the estimation of hydration free energies of rigid water and methane (LJ + Coulomb interactions). We find good agreement with experimental reference values. (9/n)
We then parametrize the interpolating potential with a neural network and train it to be the equilibrium potential corresponding to the samples. Since the endpoint Hamiltonians are also available, we do this with target score matching. (8/n) https://t.co/9ttv7bpfjK
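A toy illustration of the idea (all names hypothetical, with a 1D Gaussian standing in for an endpoint Boltzmann distribution): because the endpoint potential is known analytically, its exact score is available as a regression target, in contrast to the noisy targets of denoising score matching.

```python
import numpy as np

def target_score(x):
    """Exact score of a standard Gaussian endpoint: d/dx log p(x) = -x."""
    return -x

def model_score(x, theta):
    """Toy linear score model s_theta(x) = -theta * x (assumed parametrization)."""
    return -theta * x

def tsm_loss(theta, xs):
    """Regress the model score onto the exact endpoint score."""
    return np.mean((model_score(xs, theta) - target_score(xs)) ** 2)

rng = np.random.default_rng(0)
xs = rng.standard_normal(1000)
# The loss vanishes at theta = 1, where the model recovers the true score:
print(tsm_loss(1.0, xs))  # 0.0
print(tsm_loss(0.0, xs) > 0)  # True
```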
We do this by simply taking the geodesic interpolation between pairs of samples from the endpoint distributions. This is, of course, inspired by flow matching/stochastic interpolants. (7/n) https://t.co/zC6XGxK4gB
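A minimal sketch of the Euclidean case, where the geodesic between two configurations is a straight line (function and variable names are illustrative, not the paper's code):

```python
import numpy as np

def interpolate(x0, x1, t):
    """Stochastic-interpolant-style bridge between endpoint samples.

    In Euclidean space the geodesic between two configurations is the
    straight line, so the interpolant at time t is a convex combination.
    x0, x1: samples from the two endpoint Boltzmann distributions.
    """
    return (1.0 - t) * x0 + t * x1

# A pair of toy 3-particle configurations (hypothetical data):
x0 = np.zeros((3, 3))
x1 = np.ones((3, 3))
print(interpolate(x0, x1, 0.5))  # halfway configuration, all entries 0.5
```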
In this work, we go the other way around, and define the interpolation by the sampling process of the intermediate densities. (6/n)
For TI, this means we are free to choose one of these two descriptions of the interpolation; the hard part is obtaining the other. Usually one chooses the interpolation of potentials and runs simulations at a sequence of intermediate potentials to obtain samples. (5/n)
Note that (1) and (2) define the same object, a one-parameter family of probability densities interpolating between the endpoint Boltzmann distributions. (4/n)
Thus, to numerically estimate the free-energy difference, two things are necessary: (1) an interpolating family of potentials and (2) samples from the Boltzmann densities of the intermediate potentials to estimate the expectation value in the integrand. (3/n)
Thermodynamic Integration (TI) computes the free energy difference between two potentials as an integral over a coupling variable parametrising an interpolation between the two potentials. (2/n)
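In symbols, the standard TI identity (with β the inverse temperature and ⟨·⟩ the equilibrium average under the intermediate potential):

```latex
\Delta F \;=\; F_1 - F_0
\;=\; \int_0^1 \left\langle \frac{\partial U_\lambda(x)}{\partial \lambda} \right\rangle_{p_\lambda} \mathrm{d}\lambda,
\qquad
p_\lambda(x) \;\propto\; e^{-\beta U_\lambda(x)} .
```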
That’s a wrap! 🤩🙏🏻 Thank you everyone for making this possible and so enjoyable! Stay tuned for a second edition in a couple of years! 😉 #ML4PhysChem #workshop #generativeAI #AI4Science
To build the next generation of intelligent agents, developing efficient world models is essential. We introduce Δ-IRIS, an agent that learns behaviors by imagining millions of trajectories in its world model. Paper: https://t.co/mQx2Dg75hS Code: https://t.co/3vmI2PIz7H 🧵👇
To validate all this, we compare the estimates of the average density and the excess chemical potential to grand canonical MC simulations. (6/n)
Given the estimates of the canonical free energies at fixed particle count we end up with a grand canonical sampler at any choice of the chemical potential. (5/n)
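In formulas (standard grand canonical bookkeeping, with β the inverse temperature and F_N the canonical free energy at particle count N):

```latex
p(N \mid \mu) \;=\; \frac{e^{\beta(\mu N - F_N)}}{\Xi(\mu)},
\qquad
\Xi(\mu) \;=\; \sum_N e^{\beta(\mu N - F_N)} .
```

Sampling N from p(N | μ) and then a configuration from the canonical ensemble at that N yields a grand canonical sampler at any choice of μ.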
As an application, we estimate the free energy of a periodic Lennard-Jones liquid at various densities. (4/n)
Given that we also have access to the temporal gradient of the learnt potential via autograd, and can sample all equilibrium distributions of the intermediate potentials, we have all the ingredients for thermodynamic integration. (3/n)
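A toy instance of these ingredients (hypothetical names, with an analytic Gaussian family standing in for the learnt potential; in the paper's setting the temporal gradient would come from autograd rather than a hand-written derivative):

```python
import numpy as np

# Family of Gaussian potentials U_t(x) = x^2 / (2 s_t), with the variance s_t
# interpolated linearly between s0 and s1 (beta = 1 throughout).
s0, s1 = 1.0, 4.0

def s(t):
    return (1 - t) * s0 + t * s1

def dUdt(x, t):
    # d/dt [x^2 / (2 s_t)] = -x^2 * (s1 - s0) / (2 s_t^2)
    return -x**2 * (s1 - s0) / (2 * s(t) ** 2)

rng = np.random.default_rng(0)
ts = np.linspace(0.0, 1.0, 101)
# Thermodynamic integration: average dU/dt over equilibrium samples at each t,
# then integrate over t with the trapezoidal rule.
means = np.array(
    [np.mean(dUdt(rng.normal(0.0, np.sqrt(s(t)), 100_000), t)) for t in ts]
)
dF = np.sum(0.5 * (means[1:] + means[:-1]) * np.diff(ts))
print(dF)  # close to the exact -0.5 * ln(s1 / s0) = -0.693...
```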
We learn a time-dependent potential interpolating between the target and the prior potentials and optimize its force (negative spatial gradient) with a denoising diffusion objective. (2/n)
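A minimal sketch of that training signal (names hypothetical, with a toy quadratic potential standing in for the network): noise a data point and regress the force onto the score of the Gaussian noising kernel, the denoising score matching target.

```python
import numpy as np

def force(x, theta):
    """Force (negative spatial gradient) of a toy potential U(x) = theta * x**2 / 2."""
    return -theta * x

def denoising_loss(theta, x0, sigma, eps):
    xt = x0 + sigma * eps              # noised configuration
    target = -(xt - x0) / sigma**2     # score of the Gaussian noising kernel
    return np.mean((force(xt, theta) - target) ** 2)

rng = np.random.default_rng(1)
x0 = rng.standard_normal(500)          # "data" drawn from a unit Gaussian
eps = rng.standard_normal(500)
# The loss prefers theta near 1, matching the data distribution's true score:
print(denoising_loss(1.0, x0, 0.1, eps) < denoising_loss(5.0, x0, 0.1, eps))  # True
```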