PyTorch implementations of:
- Automated selection of augmentations: Graph Contrastive Learning Automated [talk] [poster] [appendix]. Yuning You, Tianlong Chen, Yang Shen, Zhangyang Wang. In ICML 2021.
- Generating augmentations with generative models: Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations. Yuning You, Tianlong Chen, Zhangyang Wang, Yang Shen. In WSDM 2022.
In this repository, we propose a principled framework named joint augmentation optimization (JOAO) to automatically, adaptively, and dynamically select augmentations during GraphCL training. A sanity check shows that the selected augmentations align with previous "best practices", as shown in Figure 3 of Graph Contrastive Learning Automated (ICML 2021). The corresponding folder names follow the pattern $Setting_$Dataset.
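To give a feel for the adversarial selection idea, here is a minimal NumPy sketch (not the repository's implementation or the paper's exact update rule): a categorical distribution over augmentations is pushed toward the augmentations that currently incur a higher contrastive loss, then projected back onto the probability simplex. The augmentation names, `joao_update`, and the step size `gamma` are all illustrative.

```python
import numpy as np

AUGMENTATIONS = ["node_drop", "edge_pert", "subgraph", "attr_mask", "identity"]

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def joao_update(probs, losses, gamma=0.1):
    """One adversarial step: shift probability mass toward augmentations
    with higher contrastive loss, then renormalize via projection."""
    return project_to_simplex(probs + gamma * losses)

rng = np.random.default_rng(0)
probs = np.full(len(AUGMENTATIONS), 1.0 / len(AUGMENTATIONS))
for step in range(100):
    # Placeholder per-augmentation losses; in GraphCL these would come
    # from the contrastive (NT-Xent) objective on augmented views.
    losses = rng.random(len(AUGMENTATIONS))
    losses[2] += 2.0  # pretend "subgraph" is consistently the hardest
    probs = joao_update(probs, losses)
    # Sample the next augmentation from the current (adversarial) distribution.
    aug = AUGMENTATIONS[rng.choice(len(probs), p=probs)]
```

After a few steps the distribution concentrates on the augmentation with the persistently highest loss, which is the adversarial behavior JOAO exploits.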
We further propose leveraging graph generative models to directly generate augmentations (LP, for Learned Priors) rather than relying on prefabricated ones, as shown in Figure 2 of Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations (WSDM 2022). The corresponding folder names end with LP: $Setting_$Dataset_LP. Note that although the study uses GraphCL as the base model, yielding GraphCL-LP, the LP framework is more general and can be combined with other base models (such as BGRL in Appendix B).
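As a rough illustration of the learned-prior idea (a hedged sketch, not the repository's implementation): a generative model can produce an augmented view by sampling edges from a learned edge-probability matrix, here a VGAE-style inner-product decoder over node embeddings. All names below are illustrative.

```python
import numpy as np

def generate_view(z, rng):
    """Sample an augmented adjacency matrix from an inner-product decoder,
    VGAE-style: P(edge i~j) = sigmoid(z_i . z_j)."""
    logits = z @ z.T
    probs = 1.0 / (1.0 + np.exp(-logits))
    upper = rng.random(probs.shape) < probs
    adj = np.triu(upper, k=1)       # keep the upper triangle, no self-loops
    adj = adj | adj.T               # symmetrize
    return adj.astype(int)

rng = np.random.default_rng(1)
z = rng.normal(size=(6, 4))         # node embeddings from a (pretrained) encoder
view1 = generate_view(z, rng)       # two stochastic samples of the same prior
view2 = generate_view(z, rng)       # give two correlated views for contrast
```

Because both views are sampled from the same learned distribution, they are correlated but not identical, which is what a contrastive objective needs.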
Dependencies:
- torch-geometric >= 1.6.0
- ogb == 1.2.4
Experiments:
- Semi-supervised learning [JOAO: TU Datasets] [JOAO: OGB] [GraphCL-LP: TU Datasets] [GraphCL-LP: OGB]
- Unsupervised representation learning [JOAO: TU Datasets]
- Transfer learning [JOAO: MoleculeNet and PPI] [GraphCL-LP: MoleculeNet and PPI]
If you use this code for your research, please cite our papers:
@article{you2021graph,
  title={Graph Contrastive Learning Automated},
  author={You, Yuning and Chen, Tianlong and Shen, Yang and Wang, Zhangyang},
  journal={arXiv preprint arXiv:2106.07594},
  year={2021}
}
@article{you2022bringing,
  title={Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations},
  author={You, Yuning and Chen, Tianlong and Wang, Zhangyang and Shen, Yang},
  journal={arXiv preprint arXiv:2201.01702},
  year={2022}
}