Aditi Krishnapriyan

@ask1729

Followers
783
Following
141
Media
29
Statuses
73

Assistant Professor at UC Berkeley

Berkeley, CA
Joined January 2016
@ask1729
Aditi Krishnapriyan
23 days
9/ This was a very fun project to work on with an amazing team (Sanjeev Raja, Martin Sipka, Michael Psenka, Toby Kreiman, Michal Pavelka) and a great way to explore statistical physics + generative modeling connections! Paper:
0
0
14
@ask1729
Aditi Krishnapriyan
23 days
8/ Our method is a way to turn any score-based generative model of IID molecular configurations into an efficient hypothesis generator for dynamical events, without system-specific training. We're excited to advance this method as generative models continue to scale and improve!
1
0
3
@ask1729
Aditi Krishnapriyan
23 days
7/ We show that the path ensemble sampled by our OM optimization approach can be used to compute transition rates. Specifically, we can estimate the committor function by minimizing a well-known variational objective over the sampled paths, from which we can compute the rate
Tweet media one
1
0
2
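A minimal, hypothetical sketch of the kind of variational committor fit described in the tweet above: a small network q(x) is trained on configurations pooled from the sampled path ensemble to minimize a Dirichlet-form objective with soft boundary conditions. The architecture, the basin masks (in_A, in_B), and the boundary weighting are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn as nn

class Committor(nn.Module):
    """Small MLP with a sigmoid head so q(x) stays in [0, 1]."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def committor_loss(q_model, configs, in_A, in_B, bc_weight=10.0):
    """Variational (Dirichlet-form) objective over configurations from sampled paths.

    configs: (N, dim) tensor of frames pooled from the path ensemble;
    in_A / in_B: boolean masks marking frames inside the reactant / product basins,
    enforced as soft boundary conditions q=0 in A and q=1 in B.
    """
    configs = configs.detach().clone().requires_grad_(True)
    q = q_model(configs)
    (grad_q,) = torch.autograd.grad(q.sum(), configs, create_graph=True)
    dirichlet = (grad_q ** 2).sum(dim=-1).mean()          # mean squared gradient of q
    boundary = (q[in_A] ** 2).mean() + ((q[in_B] - 1.0) ** 2).mean()
    return dirichlet + bc_weight * boundary
```

In the standard variational picture, the minimizer approximates the committor, and a suitably weighted average of |∇q|² over the equilibrium ensemble is proportional to the transition rate; the paper's exact objective and rate estimator may differ from this sketch.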
@ask1729
Aditi Krishnapriyan
23 days
6/ We show examples where our method generalizes beyond the original training data, predicting accurate transition paths both in systems not seen by the generative model and when data from the transition regions is selectively removed.
1
0
2
@ask1729
Aditi Krishnapriyan
23 days
5/ Results on protein folding transitions show our method efficiently generates diverse, physically meaningful pathways, closely matching reference molecular dynamics (MD) simulations
Tweet media one
1
0
2
@ask1729
Aditi Krishnapriyan
23 days
4/ The inherent stochasticity of generative models can be used to efficiently produce many diverse initial guesses for the transition path. Each initial guess relaxes into its nearby local minimum during OM optimization, allowing us to approximate the transition path ensemble
Tweet media one
1
0
3
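To illustrate the multi-start idea in the tweet above: draw many stochastic initial path guesses from the generative model, relax each one independently by gradient descent on a differentiable path action with the endpoints held fixed, and collect the resulting local minima as an approximate path ensemble. The sampler is left abstract here, `path_action` is any differentiable action (a score-based OM version is sketched further down), and the optimizer settings are placeholders; this is a schematic, not the paper's exact procedure.

```python
import torch

def relax_paths(initial_paths, path_action, n_steps=500, lr=1e-2):
    """Relax each stochastic initial guess into its nearby local minimum of the action.

    initial_paths: iterable of (T, dim) tensors sampled from a generative model;
    path_action: differentiable callable mapping a (T, dim) path to a scalar action.
    Endpoints (reactant/product states) are held fixed; only interior frames move.
    """
    ensemble = []
    for path in initial_paths:
        start, end = path[:1].detach(), path[-1:].detach()
        interior = path[1:-1].detach().clone().requires_grad_(True)
        opt = torch.optim.Adam([interior], lr=lr)
        for _ in range(n_steps):
            opt.zero_grad()
            full_path = torch.cat([start, interior, end], dim=0)
            path_action(full_path).backward()   # gradient of the action w.r.t. interior frames
            opt.step()
        ensemble.append(torch.cat([start, interior.detach(), end], dim=0))
    return ensemble
```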
@ask1729
Aditi Krishnapriyan
23 days
3/ We interpret transition paths as realizations of stochastic dynamics induced by pre-trained diffusion and flow matching models. This turns path sampling into an OM action-minimization problem, with the learned score function being a "force field" over the data distribution.
1
0
6
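To make the "score as force field" idea concrete, here is a hedged sketch of a discretized OM-style action in which a pre-trained score model supplies the drift of the assumed overdamped dynamics. The time step, the diffusion coefficient, treating the score as time-independent (e.g., evaluated at a small noise level), and the omission of the divergence correction term are all assumptions of this illustration, not the paper's exact formulation.

```python
import torch

def om_action(path, score_model, dt=1e-3, diffusion=1.0):
    """Discretized Onsager-Machlup-style action for a path of shape (T, dim).

    The learned score s(x) = grad log p(x) plays the role of a force field:
    the drift of the assumed overdamped dynamics is D * s(x).
    """
    velocity = (path[1:] - path[:-1]) / dt           # finite-difference velocities
    drift = diffusion * score_model(path[:-1])       # score model evaluated along the path
    residual = velocity - drift
    # Quadratic penalty on deviations from the score-induced dynamics;
    # the divergence/Laplacian correction is omitted in this sketch.
    return (dt / (4.0 * diffusion)) * (residual ** 2).sum()
```

A function like this could serve as the `path_action` callable in the relaxation loop sketched above, e.g. `path_action=lambda p: om_action(p, score_model)`.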
@ask1729
Aditi Krishnapriyan
23 days
2/ Our approach leverages least action principles by minimizing the Onsager-Machlup (OM) functional. Normally, this approach can only find a single minimum-energy path; this is where generative models come in handy! :)
Tweet media one
1
0
3
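For reference, one common form of the OM action, for overdamped Langevin dynamics with potential U(x), diffusion constant D, and a path x(t) on [0, T], is the following (a sketch: many formulations add a Laplacian correction term, and the paper's exact functional may differ):

```latex
S_{\mathrm{OM}}[x] \;=\; \frac{1}{4D}\int_{0}^{T} \bigl\lVert \dot{x}(t) + \nabla U\bigl(x(t)\bigr) \bigr\rVert^{2}\,\mathrm{d}t
```

Deterministic minimization of this action over paths with fixed endpoints yields a single most-probable (minimum-action) path, which is why diverse stochastic initializations from a generative model are needed to recover an ensemble.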
@ask1729
Aditi Krishnapriyan
23 days
1/ Generating transition pathways (e.g., folded ↔ unfolded protein) is a huge challenge: we tackle this by combining the scalability of pre-trained, score-based generative models with statistical mechanics insights (no training required!). To appear at #ICML2025
2
33
257
@ask1729
Aditi Krishnapriyan
2 months
Excited to see the OMol25 dataset out! This was really fun to collaborate on 😃.
@SamMBlau
Sam Blau
2 months
The Open Molecules 2025 dataset is out! With >100M gold-standard ωB97M-V/def2-TZVPD calcs of biomolecules, electrolytes, metal complexes, and small molecules, OMol is by far the largest, most diverse, and highest quality molecular DFT dataset for training MLIPs ever made 1/N
Tweet media one
3
3
68
@ask1729
Aditi Krishnapriyan
3 months
A knowledge distillation approach to get fast, specialized machine learning force fields: this work will be presented at #ICLR2025 this Sat (Apr 26, 10 AM) at Poster Session 5 + it will also be a spotlight talk at the #AI4Mat workshop on Apr 28!
@ask1729
Aditi Krishnapriyan
4 months
1/ Machine learning force fields are hot right now 🔥: models are getting bigger + being trained on more data. But how do we balance size, speed, and specificity? We introduce a method for distilling large-scale MLFFs into fast, specialized MLFFs!
0
2
22
@ask1729
Aditi Krishnapriyan
4 months
7/ This was a very fun project with Ishan Amin and Sanjeev Raja, who will be presenting this at #ICLR2025! Paper and code below. Paper: Code:
1
1
12
@ask1729
Aditi Krishnapriyan
4 months
6/ The distilled MLFFs are much faster to run than the original large-scale MLFF: not everyone has the GPU resources to use big models and many scientists only care about studying specific systems (w/ the correct physics!). This is a way to get the best of all worlds!
Tweet media one
1
1
5
@ask1729
Aditi Krishnapriyan
4 months
5/ We can also balance training at scale efficiently (often w/ minimal constraints) with distilling the correct physics into the small MLFF at test time: e.g., taking energy gradients to get conservative forces, and ensuring energy conservation for molecular dynamics.
Tweet media one
1
0
4
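As a reminder of the standard mechanism behind "taking energy gradients to get conservative forces": predict a scalar energy and differentiate it with respect to atomic positions, so the forces are exactly the negative gradient of a potential, which is what makes energy-conserving MD possible (up to integrator error). `energy_model` is a placeholder for a distilled MLFF; this is a generic sketch, not the paper's code.

```python
import torch

def conservative_forces(energy_model, positions):
    """Forces as the exact negative gradient of a predicted scalar energy.

    positions: (n_atoms, 3) tensor; energy_model maps positions to a scalar
    (or per-atom) energy. F = -dE/dx by construction, so the force field is
    conservative.
    """
    positions = positions.detach().clone().requires_grad_(True)
    energy = energy_model(positions).sum()             # scalar total energy
    (forces,) = torch.autograd.grad(-energy, positions)
    return forces
```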
@ask1729
Aditi Krishnapriyan
4 months
4/ Smaller, specialized MLFFs distilled from the large-scale model are more accurate than training from scratch on the same subset of data: the representations from the large-scale model help boost performance, while the smaller models are much faster to run
Tweet media one
Tweet media two
Tweet media three
1
0
3
@ask1729
Aditi Krishnapriyan
4 months
3/ We formulate our distillation procedure as training the smaller MLFF to match Hessians of the large-scale model's energy predictions (using subsampling methods to improve efficiency). This works better than distillation methods that try to match features.
Tweet media one
1
0
6
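A hedged sketch of what Hessian matching with subsampling might look like in practice: rather than forming full Hessians, probe the teacher and student energies with a few random vectors per structure and match the resulting Hessian-vector products, computed by double backpropagation. The probe-vector scheme, loss weighting, and function names are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def hvp(energy_fn, positions, v, create_graph=False):
    """Hessian-vector product of a scalar energy w.r.t. positions via double backward."""
    positions = positions.detach().clone().requires_grad_(True)
    energy = energy_fn(positions).sum()
    (grad,) = torch.autograd.grad(energy, positions, create_graph=True)
    (hv,) = torch.autograd.grad((grad * v).sum(), positions, create_graph=create_graph)
    return hv

def hessian_distill_loss(teacher, student, positions, n_probes=4):
    """Match student Hessian-vector products to the (detached) teacher's."""
    loss = 0.0
    for _ in range(n_probes):
        v = torch.randn_like(positions)                       # random probe direction
        target = hvp(teacher, positions, v).detach()          # teacher Hvp, no grad needed
        pred = hvp(student, positions, v, create_graph=True)  # keep graph to train the student
        loss = loss + ((pred - target) ** 2).mean()
    return loss / n_probes
```

In training, a loss like this would presumably be combined with the usual energy and force matching terms on the specialized subset of data.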
@ask1729
Aditi Krishnapriyan
4 months
2/ Model distillation involves transferring the general-purpose representations learned by a large-scale model into smaller, faster models: in our case, specialized to specific regions of chemical space. We can use these faster MLFFs for a variety of downstream tasks.
Tweet media one
1
0
6
@ask1729
Aditi Krishnapriyan
4 months
1/ Machine learning force fields are hot right now 🔥: models are getting bigger + being trained on more data. But how do we balance size, speed, and specificity? We introduce a method for distilling large-scale MLFFs into fast, specialized MLFFs!
2
20
102
@ask1729
Aditi Krishnapriyan
7 months
Scaling neural network interatomic potentials for molecular and materials simulation: this work will be presented at #NeurIPS2024 this Fri (Dec 13, 11 AM), at Poster Session 5 East! Code is now publicly available:
@ask1729
Aditi Krishnapriyan
9 months
1/ What are key design principles for scaling neural network interatomic potentials? Our exploration leads us to top results on the Open Catalyst Project (OC20, OC22), SPICE, and MPTrj, with vastly improved efficiency! Accepted at #NeurIPS2024:
Tweet media one
1
13
71
@ask1729
Aditi Krishnapriyan
9 months
Really cool use case of a method applied on top of EScAIP: in this case, a new method for transition state search, which requires fast inference and a low memory footprint of the underlying NNIP.
@SamMBlau
Sam Blau
9 months
@ask1729 Here's a reaction path generated by @ericyuan_00000 using our new path optimization method (named Popcornn, still under development) atop EScAIP trained on OC20, where we find a much lower barrier than NEB. EScAIP's speed & low memory footprint are critical for this calculation!
0
1
5