
MAT

The official implementation of the Molecule Attention Transformer (paper on arXiv).

[Figure: MAT architecture]

Code

  • EXAMPLE.ipynb: a Jupyter notebook showing how to load pretrained weights into MAT (see the sketch after this list),
  • transformer.py: the MAT class implementation,
  • utils.py: utility functions.
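
As a rough sketch of what EXAMPLE.ipynb covers, the snippet below builds the model and runs a dummy forward pass. The class name MAT is taken from the list above, but the constructor arguments, input featurization, and forward signature are assumptions for illustration; the authoritative versions live in transformer.py and the notebook.

    import torch
    from transformer import MAT  # MAT class implemented in transformer.py

    # Hypothetical hyperparameters; consult transformer.py for the real signature.
    model = MAT(d_atom=28, d_model=1024, n_layers=8, n_heads=16)
    model.eval()

    # Dummy molecule: per-atom features plus adjacency and distance matrices,
    # the inputs MAT's molecule attention operates on (per the paper).
    batch, n_atoms, d_atom = 1, 9, 28
    node_features = torch.zeros(batch, n_atoms, d_atom)
    adjacency = torch.zeros(batch, n_atoms, n_atoms)
    distances = torch.zeros(batch, n_atoms, n_atoms)

    with torch.no_grad():
        prediction = model(node_features, adjacency, distances)  # argument order is an assumption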

More functionality will be available soon!

Pretrained weights

Pretrained weights are available here.
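
Loading them into the model is standard PyTorch; here is a minimal sketch, assuming the checkpoint is a plain state dict and that model is the MAT instance built above (the file name is a placeholder):

    import torch

    # Placeholder path; substitute the file downloaded from the link above.
    state_dict = torch.load('mat_pretrained.pt', map_location='cpu')
    model.load_state_dict(state_dict)
    model.eval()  # switch to inference mode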

Results

In this section we present the average rank across the 7 datasets from our benchmark (a toy computation of this metric is sketched after the list).

  • Results for a hyperparameter search budget of 500 combinations.

  • Results for a hyperparameter search budget of 150 combinations.

  • Results for the pretrained model.
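
To make the metric concrete: each model is ranked per dataset by its score (rank 1 is best), and the ranks are averaged across datasets. The scores below are fabricated placeholders, purely for illustration.

    # Toy average-rank computation; scores are made-up placeholders,
    # one score per dataset (higher is better).
    scores = {
        'MAT': [0.82, 0.75, 0.91],
        'GCN': [0.80, 0.77, 0.88],
        'RF':  [0.79, 0.70, 0.90],
    }

    n_datasets = len(next(iter(scores.values())))
    ranks = {name: [] for name in scores}
    for d in range(n_datasets):
        # Sort models by score on dataset d, best first; rank 1 is best.
        ordered = sorted(scores, key=lambda m: scores[m][d], reverse=True)
        for rank, name in enumerate(ordered, start=1):
            ranks[name].append(rank)

    avg_rank = {name: sum(r) / len(r) for name, r in ranks.items()}
    print(avg_rank)  # e.g. {'MAT': 1.33, 'GCN': 2.0, 'RF': 2.67}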

Requirements

  • PyTorch 1.4
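
A quick way to check the installed version from Python; the exact-version assertion is a convenience for this sketch, not something the repo itself enforces:

    import torch

    # The repo targets PyTorch 1.4; other versions may work but are untested here.
    print(torch.__version__)
    assert torch.__version__.startswith('1.4'), 'expected PyTorch 1.4'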

Acknowledgments

The transformer implementation is inspired by The Annotated Transformer.