  • Stars: 501
  • Rank: 88,002 (Top 2%)
  • Language: Python
  • License: Other
  • Created: over 2 years ago
  • Updated: about 1 month ago

Repository Details

MACE - Fast and accurate machine learning interatomic potentials with higher order equivariant message passing.

MACE

About MACE

MACE provides fast and accurate machine learning interatomic potentials with higher order equivariant message passing.

This repository contains the MACE reference implementation developed by Ilyes Batatia, Gregor Simm, and David Kovacs.

Also available:

  • MACE in JAX, currently about 2x faster at evaluation; training in PyTorch is still recommended for best performance.
  • MACE layers for constructing higher order equivariant graph neural networks for arbitrary 3D point clouds.

Documentation

Partial documentation is available at: https://mace-docs.readthedocs.io/en/latest/

Installation

Requirements:

(for openMM, use Python = 3.9)

conda installation

If you do not have CUDA pre-installed, it is recommended to follow the conda installation process:

# Create a virtual environment and activate it
conda create --name mace_env
conda activate mace_env

# Install PyTorch
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia

# (optional) Install MACE's dependencies from Conda as well
conda install numpy scipy matplotlib ase opt_einsum prettytable pandas e3nn

# Clone and install MACE (and all required packages)
git clone git@github.com:ACEsuit/mace.git
pip install ./mace

pip installation

To install via pip, follow the steps below:

# Create a virtual environment and activate it
python -m venv mace-venv
source mace-venv/bin/activate

# Install PyTorch (for example, for CUDA 11.6 [cu116])
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116

# Clone and install MACE (and all required packages)
git clone git@github.com:ACEsuit/mace.git
pip install ./mace

Note: The package of the same name on PyPI is unrelated to this one.

Usage

Training

To train a MACE model, you can use the run_train.py script:

python ./mace/scripts/run_train.py \
    --name="MACE_model" \
    --train_file="train.xyz" \
    --valid_fraction=0.05 \
    --test_file="test.xyz" \
    --config_type_weights='{"Default":1.0}' \
    --E0s='{1:-13.663181292231226, 6:-1029.2809654211628, 7:-1484.1187695035828, 8:-2042.0330099956639}' \
    --model="MACE" \
    --hidden_irreps='128x0e + 128x1o' \
    --r_max=5.0 \
    --batch_size=10 \
    --max_num_epochs=1500 \
    --swa \
    --start_swa=1200 \
    --ema \
    --ema_decay=0.99 \
    --amsgrad \
    --restart_latest \
    --device=cuda

To give a specific validation set, use the argument --valid_file. To set a larger batch size for evaluating the validation set, specify --valid_batch_size.

To control the model's size, you need to change --hidden_irreps. For most applications, the recommended default model size is --hidden_irreps='256x0e' (meaning 256 invariant messages) or --hidden_irreps='128x0e + 128x1o'. If the model is not accurate enough, you can include higher order features, e.g., 128x0e + 128x1o + 128x2e, or increase the number of channels to 256.

It is usually preferred to add the isolated atoms to the training set, rather than reading in their energies through the command line like in the example above. To label them in the training set, set config_type=IsolatedAtom in their info fields. If you prefer not to use or do not know the energies of the isolated atoms, you can use the option --E0s="average" which estimates the atomic energies using least squares regression.
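
For instance, using ASE (which MACE already depends on), the isolated-atom configurations can be built and labelled before being merged into the training file. This is only a sketch: the element energies are copied from the example above, and the energy info key assumes the default --energy_key.

from ase import Atoms
from ase.io import write

# Reference energies of the isolated atoms (values taken from the example
# above; replace with your own). Assumes the default --energy_key ("energy").
e0s = {"H": -13.663181292231226, "C": -1029.2809654211628}

isolated_atoms = []
for symbol, energy in e0s.items():
    atom = Atoms(symbol)                       # single atom at the origin
    atom.info["config_type"] = "IsolatedAtom"  # label read by run_train.py
    atom.info["energy"] = energy
    isolated_atoms.append(atom)

# Write them out; concatenate with the rest of your training configurations.
write("isolated_atoms.xyz", isolated_atoms)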

If the keyword --swa is enabled, the energy weight of the loss is increased for the last ~20% of the training epochs (from --start_swa epochs). This setting usually helps lower the energy errors.

The precision can be changed with the keyword --default_dtype; the default is float64, but float32 gives a significant speed-up (usually about a factor of 2 in training).

The keywords --batch_size and --max_num_epochs should be adapted to the size of the training set: increase the batch size as the number of training configurations grows, and decrease the number of epochs. A heuristic for the initial settings is to keep the total number of gradient updates roughly constant at 200 000, where the number of updates is $\text{max-num-epochs}\times\frac{\text{num-configs-training}}{\text{batch-size}}$.
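
As a plain-arithmetic illustration of this heuristic (the training-set size below is an arbitrary example, not a recommendation):

# Keep the total number of gradient updates roughly constant at 200 000.
num_configs_training = 5_000   # example training-set size
batch_size = 10                # as in the training command above

target_updates = 200_000
max_num_epochs = round(target_updates * batch_size / num_configs_training)
print(max_num_epochs)  # -> 400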

The code can handle training sets with heterogeneous labels, for example sets containing both bulk structures with stress and isolated molecules. In this case, to make the code ignore the stress on the molecules, set config_stress_weight = 0.0 in the info fields of the molecular configurations.
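
With ASE this can be done before writing the training file; a minimal sketch, assuming the bulk and molecular configurations live in separate (placeholder) files:

from ase.io import read, write

# Placeholder input files: bulk structures with stress, and isolated molecules.
bulk = read("bulk.xyz", index=":")
molecules = read("molecules.xyz", index=":")

for atoms in molecules:
    atoms.info["config_stress_weight"] = 0.0  # ignore stress for these configs

# Combine into a single heterogeneous training set.
write("train.xyz", bulk + molecules)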

To use Apple Silicon GPU acceleration, make sure to install the latest PyTorch version and specify --device=mps.

Evaluation

To evaluate your MACE model on an XYZ file, run the eval_configs.py script:

python3 ./mace/scripts/eval_configs.py \
    --configs="your_configs.xyz" \
    --model="your_model.model" \
    --output="./your_output.xyz"

Tutorial

You can run our Colab tutorial to quickly get started with MACE.

Weights and Biases for experiment tracking

If you would like to use MACE with Weights and Biases to log your experiments, install with

pip install ./mace[wandb]

and specify the necessary keyword arguments (--wandb, --wandb_project, --wandb_entity, --wandb_name, --wandb_log_hypers).

Development

We use black, isort, pylint, and mypy. Run the following to format and check your code:

bash ./scripts/run_checks.sh

We have CI set up to check this, but we highly recommend that you run those commands before you commit (and push) to avoid accidentally committing bad code.

We are happy to accept pull requests under an MIT license. Please copy/paste the license text as a comment into your pull request.

References

If you use this code, please cite our papers:

@inproceedings{Batatia2022mace,
  title = {{MACE}: Higher Order Equivariant Message Passing Neural Networks for Fast and Accurate Force Fields},
  author = {Ilyes Batatia and David Peter Kovacs and Gregor N. C. Simm and Christoph Ortner and Gabor Csanyi},
  booktitle = {Advances in Neural Information Processing Systems},
  editor = {Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
  year = {2022},
  url = {https://openreview.net/forum?id=YPpSngE-ZU}
}

@misc{Batatia2022Design,
  title = {The Design Space of E(3)-Equivariant Atom-Centered Interatomic Potentials},
  author = {Batatia, Ilyes and Batzner, Simon and Kov{\'a}cs, D{\'a}vid P{\'e}ter and Musaelian, Albert and Simm, Gregor N. C. and Drautz, Ralf and Ortner, Christoph and Kozinsky, Boris and Cs{\'a}nyi, G{\'a}bor},
  year = {2022},
  number = {arXiv:2205.06643},
  eprint = {2205.06643},
  eprinttype = {arxiv},
  doi = {10.48550/arXiv.2205.06643},
  archiveprefix = {arXiv}
}

Contact

If you have any questions, please contact us at [email protected].

For bugs or feature requests, please use GitHub Issues.

License

MACE is published and distributed under the MIT License.

More Repositories

  1. ACE.jl: Parameterisation of Equivariant Properties of Particle Systems (Julia, 65 stars)
  2. mace-jax: Equivariant machine learning interatomic potentials in JAX (Python, 57 stars)
  3. ACEpotentials.jl: Machine Learning Interatomic Potentials with the Atomic Cluster Expansion (Julia, 46 stars)
  4. mace-mp: MACE-MP models (Shell, 40 stars)
  5. mace-layer: Higher order equivariant graph neural networks for 3D point clouds (Python, 32 stars)
  6. ACE1.jl: Atomic Cluster Expansion for Modelling Invariant Atomic Properties (Julia, 20 stars)
  7. mace-off: MACE-OFF23 models (Shell, 19 stars)
  8. ObjectPools.jl: Thread-safe and flexible temporary arrays and object cache for semi-manual memory management (Julia, 13 stars)
  9. ACEhamiltonians.jl (Julia, 12 stars)
  10. Polynomials4ML.jl: Polynomials for ML: fast evaluation, batching, differentiation (Julia, 12 stars)
  11. EquivariantModels.jl: Tools for geometric learning (Julia, 11 stars)
  12. ACEHAL (Python, 11 stars)
  13. ACEfit.jl: Generic codes for fitting ACE models (Julia, 7 stars)
  14. ACEhamiltoniansExamples: Example code for the ACEhamiltonians code-base (Jupyter Notebook, 6 stars)
  15. IPFitting.jl: Fitting of NBodyIPs (Julia, 5 stars)
  16. ACEds.jl: Coarse-grained dynamical systems (Julia, 5 stars)
  17. ACEmd.jl (Julia, 5 stars)
  18. UltraFastACE.jl: Experimenting with faster ACE potentials (Julia, 3 stars)
  19. ACE1x.jl: Experimental features for ACE1.jl (Julia, 3 stars)
  20. ACE1docs.jl: User documentation for ACEsuit (Jupyter Notebook, 3 stars)
  21. ACEpsi.jl: ACE wave function parameterizations (Julia, 2 stars)
  22. BIPs.jl: Boost-Invariant Polynomials for jet tagging (Julia, 2 stars)
  23. ACEsktb.jl: Experimental code for tight-binding Hamiltonians (Julia, 2 stars)
  24. ACEcore.jl: Some of the core computational kernels for building ACE models (Julia, 2 stars)
  25. ACEatoms.jl: Generic code for modelling atomic properties using ACE (Julia, 2 stars)
  26. DecoratedParticles.jl (Julia, 2 stars)
  27. GeomOpt.jl: Geometry optimization interface (Julia, 1 star)
  28. HyperActiveLearning.jl: An accelerated-dynamics strategy for collecting training data (Julia, 1 star)
  29. ACEcalculator.jl: Evaluate ACE interatomic potentials and interfaces (1 star)
  30. ACEdocs_old: Documentation for ACEsuit (Julia, 1 star)
  31. WithAlloc.jl: A simple Bumper convenience extension (Julia, 1 star)
  32. ACEluxpots.jl: ACE potentials via EquivariantModels and Lux (Julia, 1 star)
  33. RepLieGroups.jl: Representations of Lie Groups (Julia, 1 star)
  34. ace.cpp: Experimental C++ routines for the ACE basis (1 star)