
Aditi Krishnapriyan
@ask1729
Followers: 783 · Following: 141 · Media: 29 · Statuses: 73
Assistant Professor at UC Berkeley
Berkeley, CA
Joined January 2016
1/ Generating transition pathways (e.g., folded ↔ unfolded protein) is a huge challenge: we tackle this by combining the scalability of pre-trained, score-based generative models with insights from statistical mechanics, no training required! To appear at #ICML2025
2 replies · 33 reposts · 257 likes
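The thread above reuses a pre-trained score model to sample transition pathways without any additional training. Below is a minimal sketch of that general idea, not the paper's actual ICML 2025 algorithm: `score_model` is an assumed pre-trained network approximating the Boltzmann score ∇ log p(x), and the interior frames of an interpolated path are relaxed with Langevin-type updates it drives.

```python
# A minimal sketch of the general idea, NOT the paper's algorithm.
# `score_model` is an assumed pre-trained network approximating the
# Boltzmann score, grad log p(x); no training happens here.
import torch

def refine_path(x_start, x_end, score_model, n_frames=32, n_steps=200, step_size=1e-4):
    """Interpolate between two metastable states, then relax the interior
    frames with Langevin-type updates driven by the pre-trained score."""
    alphas = torch.linspace(0.0, 1.0, n_frames).unsqueeze(1)  # (n_frames, 1)
    path = (1.0 - alphas) * x_start + alphas * x_end          # (n_frames, dim)
    for _ in range(n_steps):
        with torch.no_grad():
            score = score_model(path)                         # ~ grad log p(x)
        noise = torch.randn_like(path)
        # Drift toward high-probability regions plus thermal noise;
        # the endpoints (the two metastable states) stay pinned.
        path[1:-1] += step_size * score[1:-1] + (2.0 * step_size) ** 0.5 * noise[1:-1]
    return path
```

Pinning the endpoints while the pre-trained score steers the interior frames is what keeps the samples on plausible, high-density pathways without any fine-tuning.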
Excited to see the OMol25 dataset out! This was really fun to collaborate on 😃.
The Open Molecules 2025 dataset is out! With >100M gold-standard ωB97M-V/def2-TZVPD calcs of biomolecules, electrolytes, metal complexes, and small molecules, OMol is by far the largest, most diverse, and highest quality molecular DFT dataset for training MLIPs ever made 1/N
3 replies · 3 reposts · 68 likes
A knowledge distillation approach to get fast, specialized machine learning force fields: this work will be presented at #ICLR2025 this Sat (Apr 26, 10 AM) at Poster Session 5 + it will also be a spotlight talk at the #AI4Mat workshop on Apr 28!
1/ Machine learning force fields are hot right now 🔥: models are getting bigger + being trained on more data. But how do we balance size, speed, and specificity? We introduce a method for distilling large-scale MLFFs into fast, specialized MLFFs!
0 replies · 2 reposts · 22 likes
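For flavor, here is a generic knowledge-distillation step for force fields, a sketch under assumptions rather than the paper's exact procedure: a small student MLFF is fit to the energies and forces of a large frozen teacher, where `teacher` and `student` are assumed models mapping positions to per-structure energies.

```python
# A generic force-field distillation step, a sketch under assumptions,
# not the paper's exact procedure. `teacher` and `student` are assumed
# models mapping positions (batch, n_atoms, 3) -> energies (batch,).
import torch
import torch.nn.functional as F

def distill_step(teacher, student, positions, optimizer, force_weight=10.0):
    # Teacher targets: energy (no graph needed) and forces (-dE/dx,
    # computed on a separate graph since the teacher stays frozen).
    with torch.no_grad():
        e_teacher = teacher(positions)
    pos_t = positions.detach().clone().requires_grad_(True)
    f_teacher = -torch.autograd.grad(teacher(pos_t).sum(), pos_t)[0]

    # Student predictions; create_graph=True so the force-matching loss
    # can backpropagate into the student's weights.
    pos_s = positions.detach().clone().requires_grad_(True)
    e_student = student(pos_s)
    f_student = -torch.autograd.grad(e_student.sum(), pos_s, create_graph=True)[0]

    loss = F.mse_loss(e_student, e_teacher) + force_weight * F.mse_loss(f_student, f_teacher)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Matching forces as well as energies is the standard move for MLFFs, since forces are what the downstream simulation actually consumes.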
Scaling neural network interatomic potentials for molecular and materials simulation: this work will be presented at #NeurIPS2024 this Fri (Dec 13, 11 AM), at Poster Session 5 East! Code is now publicly available:
1/ What are the key design principles for scaling neural network interatomic potentials? Our exploration leads us to top results on the Open Catalyst Project (OC20, OC22), SPICE, and MPTrj, with vastly improved efficiency! Accepted at #NeurIPS2024:
1 reply · 13 reposts · 71 likes
Really cool use case built on top of EScAIP: a new method for transition state search, which requires fast inference and a low memory footprint from the underlying NNIP.
@ask1729 Here's a reaction path generated by @ericyuan_00000 using our new path optimization method (named Popcornn, still under development) atop EScAIP trained on OC20, where we find a much lower barrier than NEB. EScAIP's speed & low memory footprint are critical for this calculation!
0 replies · 1 repost · 5 likes
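To illustrate the kind of calculation described above, here is a hedged, generic stand-in for gradient-based reaction-path optimization, not the actual Popcornn implementation: `energy_fn` is an assumed differentiable NNIP (e.g., an EScAIP-style potential) mapping a stack of flattened geometries to per-image energies, and the interior images of an interpolated path are relaxed to lower a smooth proxy for the barrier.

```python
# A hedged, generic stand-in for gradient-based path optimization, not the
# actual Popcornn implementation. `energy_fn` is an assumed differentiable
# NNIP mapping stacked geometries (n_images, dim) -> energies (n_images,).
import torch

def optimize_path(x_react, x_prod, energy_fn, n_images=16, n_iters=200, lr=1e-2):
    alphas = torch.linspace(0.0, 1.0, n_images).unsqueeze(1)
    path = (1.0 - alphas) * x_react + alphas * x_prod
    interior = path[1:-1].clone().requires_grad_(True)   # endpoints stay fixed
    opt = torch.optim.Adam([interior], lr=lr)
    for _ in range(n_iters):
        full = torch.cat([x_react.unsqueeze(0), interior, x_prod.unsqueeze(0)])
        energies = energy_fn(full)               # one NNIP call per image, every iteration
        loss = torch.logsumexp(energies, dim=0)  # smooth proxy for the barrier (max energy)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.cat([x_react.unsqueeze(0), interior.detach(), x_prod.unsqueeze(0)])
```

Because the NNIP is evaluated on every image at every iteration, fast inference and a low memory footprint in the underlying model are exactly what make this kind of optimization practical.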