Brandon Wood
@bwood_m
Followers: 785
Following: 405
Media: 29
Statuses: 184
Research Scientist at FAIR, @MetaAI @OpenCatalyst. Affiliate @LBNL.
Joined September 2020
🚀Exciting news! We are releasing new UMA-1.1 models (Small and Medium) today and the UMA paper is now on arxiv! UMA represents a step-change in what’s possible with a single machine learning interatomic potential (short overview in the post below). The goal was to make a model
2
27
124
🚀With Climate Week NYC going on now, I’m excited to announce the @Meta FAIR Chemistry team's latest release - The Open Catalyst 2025 (OC25) Dataset and Models for Solid-Liquid Interfaces! Paper: https://t.co/BqQN2Qrw8u Dataset+Models: https://t.co/cCXuSFMF8I 1/n
1
12
28
Excited to present the FAIR Chemistry Leaderboard - a centralized space for our team’s community benchmark efforts. We’re kicking things off today with the OMol25 leaderboard! 📊Leaderboard: https://t.co/OVRTHWkniu 🖥️Code: https://t.co/kGsXE414kC
3
17
48
📢Interested in doing a PhD in generative models 🤖, AI4Science 🧬, Sampling 🧑‍🔬, and beyond? I am hiring PhD students at Imperial College London @ICComputing for the next application cycle. 🔗See the call below: https://t.co/kAG4qdTHXt And a light expression of interest:
5
61
222
The Meta FAIR Chemistry team continues to make meaningful strides. 1️⃣ Today we’re announcing FastCSP, a workflow that generates stable crystal structures for organic molecules. This accelerates material discovery efforts and cuts down the time to design molecular crystals from
34
88
402
Excited to share ODAC25! 🎉 Nearly 70M DFT calculations for direct air capture - expanding beyond ODAC23 with new adsorbates, functionalized MOFs, synthetic MOFs, and improved accuracy. All data + models open sourced for the community to accelerate DAC sorbent discovery.
We’re excited to introduce the Open Direct Air Capture 2025 dataset, the largest open dataset for discovering advanced materials that capture CO2 directly from the air. Developed by Meta FAIR, @GeorgiaTech, and @cusp_ai, this release enables rapid, accurate screening of carbon
2
7
27
We’re excited to introduce the Open Direct Air Capture 2025 dataset, the largest open dataset for discovering advanced materials that capture CO2 directly from the air. Developed by Meta FAIR, @GeorgiaTech, and @cusp_ai, this release enables rapid, accurate screening of carbon
37
108
579
That's a wrap for ICML 2025! 🎉 🇨🇦 Fun to present/discuss a bunch of recent work. In particular, Ray Gao and I presented: Learning Smooth and Expressive Interatomic Potentials for Physical Property Prediction, which received an oral 🎤 / spotlight poster 🖼️ 🙌 A huge shoutout
arxiv.org
Machine learning interatomic potentials (MLIPs) have become increasingly effective at approximating quantum mechanical calculations at a fraction of the computational cost. However, lower errors...
For existing MLIPs, lower test errors do not always translate to better performance in downstream tasks. We bridge this gap by proposing eSEN -- SOTA performance on compliant Matbench-Discovery (F1 0.831, κSRME 0.321) and phonon prediction. https://t.co/rzpjGm32QL 1/6
0
1
27
A convolutional neural network built with PyTorch is supporting marine conservation efforts by detecting ghost nets in sonar scans with 94% accuracy. Trained and deployed on Azure using NVIDIA A100 GPUs, the model powers GhostNetZero.ai. 🔗 Read @NVIDIA's blog to learn more:
developer.nvidia.com
Conservationists have launched a new AI tool that can sift through petabytes of underwater imaging from anywhere in the world to identify signs of abandoned or lost fishing nets—so-called ghost nets.
8
38
370
LLMs are crowded. GNNs for atoms? Just getting started ⚛️ UMA 1.1 is out — a universal MLIP for real-world chemistry, biology, and materials. High-accuracy. Open-source. Huge open field. It feels like AlexNet days. One GPU, one idea = real impact that can change the world!🌎
🚀Exciting news! We are releasing new UMA-1.1 models (Small and Medium) today and the UMA paper is now on arxiv! UMA represents a step-change in what’s possible with a single machine learning interatomic potential (short overview in the post below). The goal was to make a model
1
5
21
uma-s-1.1 is now live in the UMA educational demo! https://t.co/FZOQqIM8i0 The demo was featured in two tutorials recently - one at NAM by @johnkitchin and one at LBL by Jagriti Sahoo! We also added some new tutorials on fine-tuning and DAC applications to
facebook-fairchem-uma-demo.hf.space
🚀Exciting news! We are releasing new UMA-1.1 models (Small and Medium) today and the UMA paper is now on arxiv! UMA represents a step-change in what’s possible with a single machine learning interatomic potential (short overview in the post below). The goal was to make a model
2
8
28
UMA-1.1 is now live in fairchem-core-2.3 - https://t.co/ooAEcpJPRL! Read more about the improvements below!
github.com
What’s Changed Release of UMA-s 1.1 (uma-s-1p1) and UMA-m 1.1 (uma-m-1p1) checkpoints as well as the arxiv paper (https://arxiv.org/abs/2506.23971). Major changes UMA-m is the best in class model ...
🚀Exciting news! We are releasing new UMA-1.1 models (Small and Medium) today and the UMA paper is now on arxiv! UMA represents a step-change in what’s possible with a single machine learning interatomic potential (short overview in the post below). The goal was to make a model
1
3
15
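For readers who want to try the release: a minimal sketch of loading UMA-1.1 and driving it through an ASE calculator, assuming the fairchem-core 2.x API (pretrained_mlip / FAIRChemCalculator) and the uma-s-1p1 checkpoint name from the release notes above. Exact names, task labels, and access requirements (checkpoints may be gated on Hugging Face) can differ between versions, so check the fairchem docs.

```python
from ase.build import molecule
from fairchem.core import pretrained_mlip, FAIRChemCalculator

# Load the UMA small 1.1 checkpoint (name assumed from the release notes)
# and wrap it as an ASE calculator. task_name selects the domain / DFT
# settings the model should emulate (e.g. "omol" for molecules) - assumed API.
predictor = pretrained_mlip.get_predict_unit("uma-s-1p1", device="cpu")
calc = FAIRChemCalculator(predictor, task_name="omol")

atoms = molecule("H2O")
atoms.calc = calc
print(atoms.get_potential_energy())  # energy in eV
print(atoms.get_forces())            # forces in eV/Å
```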
Grateful to be part of such an awesome team! @csmisko, @xiangfu_ml, Ray Gao, @mshuaibii, @lbluque, Kareem Abdelmaqsoud, @VGharakhanyan, @johnkitchin, @levine_ds, Kyle Michel, @anuroopsriram, @TacoCohen, @abhshkdz, Ammar Rizvi, Sushree Jagriti Sahoo, @zackulissi, Larry Zitnick
0
1
10
The models aren’t perfect and there are a number of limitations/weaknesses (e.g. long-range interactions, occasional outliers, etc), but we are excited about the step forward. Go try the models for yourself! If you run into issues please let us know as we continue to try and make
1
0
3
Remarkably, UMA models demonstrate strong performance across materials, catalysts, molecules, molecular crystals, and metal-organic frameworks without specialized fine-tuning. This result surprised me - from our experience with JMP I expected to build good base representations
1
0
4
In the large parameter regime (~700M active parameters), MoLE seems to matter less, likely because we are data-bound (you can see this in the scaling plots, far right). However, the benefit of multi-task training in the large parameter regime can be seen by preventing overfitting
1
0
3
In the small parameter regime (~6M active parameters), MoLE enables multi-task training to improve over specialized single-task baselines, which is not possible with a standard multi-task (no MoLE) approach. 5/
1
0
3
The UMA dataset supports large models up to around 700M active parameters, but large dense models can be slow. To address this, we introduce Mixture of Linear Experts (MoLE) to increase model capacity/flexibility without sacrificing inference speed. 4/
1
0
4
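As an illustration of the MoLE idea described in the post above (a minimal sketch, not the UMA implementation): several linear "experts" are blended by gating coefficients computed from a global per-system/task embedding. Because the gate does not depend on individual atoms, the blended weights collapse into a single dense matmul at inference, so extra capacity costs essentially no extra inference time.

```python
import torch
import torch.nn as nn

class MoLELinear(nn.Module):
    """Illustrative Mixture of Linear Experts (MoLE) layer."""

    def __init__(self, d_in: int, d_out: int, n_experts: int, d_gate: int):
        super().__init__()
        # One weight matrix and bias per expert.
        self.weights = nn.Parameter(torch.randn(n_experts, d_out, d_in) * d_in ** -0.5)
        self.biases = nn.Parameter(torch.zeros(n_experts, d_out))
        # Gate maps a global (per-system/task) embedding to expert coefficients.
        self.gate = nn.Linear(d_gate, n_experts)

    def forward(self, x: torch.Tensor, global_emb: torch.Tensor) -> torch.Tensor:
        # x: (n_atoms, d_in); global_emb: (d_gate,)
        coeffs = torch.softmax(self.gate(global_emb), dim=-1)  # (n_experts,)
        w = torch.einsum("e,eoi->oi", coeffs, self.weights)    # merged weight
        b = torch.einsum("e,eo->o", coeffs, self.biases)       # merged bias
        return x @ w.T + b                                      # one dense matmul

# Toy usage: 10 atoms with 64-dim features, 8 experts, 16-dim task embedding.
layer = MoLELinear(d_in=64, d_out=128, n_experts=8, d_gate=16)
out = layer(torch.randn(10, 64), torch.randn(16))
print(out.shape)  # torch.Size([10, 128])
```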
No model has ever been trained on as much data as UMA, so it wasn't clear how large a model was required to fit the dataset. We developed empirical scaling laws to understand this question better and to more generally understand the relationship between compute, data,
1
0
6
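As a toy illustration of what fitting such an empirical scaling law can look like (made-up numbers, not the paper's procedure or data), one can fit a saturating power law of validation loss against active parameter count:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (active params in millions, validation loss) pairs.
n_million = np.array([6.0, 30.0, 150.0, 700.0])
losses = np.array([0.030, 0.022, 0.017, 0.015])

def power_law(n, a, b, c):
    # Loss falls as a power of model size toward an irreducible floor c.
    return a * n ** (-b) + c

(a, b, c), _ = curve_fit(power_law, n_million, losses, p0=(0.1, 0.5, 0.01))
print(f"L(N) ≈ {a:.3g} * N^(-{b:.3g}) + {c:.3g}  (N in millions of active params)")
```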
We (and others) have observed that models improve with more data (upper right), but DFT data is incredibly computationally expensive. One way to get more data is to pool existing datasets and train across domains and DFT settings. This additionally provides a wide data
1
0
7