  • Stars: 556
  • Rank: 78,220 (Top 2%)
  • Language: Python
  • Created: over 4 years ago
  • Updated: about 2 years ago


Repository Details

Hyperbolic Graph Convolutional Networks in PyTorch

1. Overview

This repository is a graph representation learning library, containing an implementation of Hyperbolic Graph Convolutions [1] in PyTorch, as well as multiple embedding approaches including:

Shallow methods (Shallow)

  • Shallow Euclidean
  • Shallow Hyperbolic [2]
  • Shallow Euclidean + Features (see [1])
  • Shallow Hyperbolic + Features (see [1])

Neural Network (NN) methods

  • Multi-Layer Perceptron (MLP)
  • Hyperbolic Neural Networks (HNN) [3]

Graph Neural Network (GNN) methods

  • Graph Convolutional Neural Networks (GCN) [4]
  • Graph Attention Networks (GAT) [5]
  • Hyperbolic Graph Convolutions (HGCN) [1]

All models can be trained for

  • Link prediction (lp)
  • Node classification (nc)

2. Setup

2.1 Installation with conda

If you don't have conda installed, please install it by following the official conda installation instructions.

git clone https://github.com/HazyResearch/hgcn

cd hgcn

conda env create -f environment.yml

2.2 Installation with pip

Alternatively, if you prefer to install dependencies with pip, please follow the instructions below:

virtualenv -p [PATH to python3.7 binary] hgcn

source hgcn/bin/activate

pip install -r requirements.txt

2.3 Datasets

The data/ folder contains source files for:

  • Cora
  • Pubmed
  • Disease
  • Airport

To run this code on new datasets, please add corresponding data processing and loading in load_data_nc and load_data_lp functions in utils/data_utils.py.
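As a rough illustration of what such a loader might look like (a hedged sketch, not code from this repository: the dataset name my_dataset, the file names, and the returned objects are all assumptions), a new branch in utils/data_utils.py could be structured as follows:

import os
import numpy as np
import scipy.sparse as sp

def load_my_dataset(data_path):
    # Hypothetical loader for a new dataset stored under data/my_dataset/.
    # Adjust file names and formats to match your data.
    adj = sp.load_npz(os.path.join(data_path, 'adj.npz'))        # sparse adjacency matrix
    features = np.load(os.path.join(data_path, 'features.npy'))  # node feature matrix
    labels = np.load(os.path.join(data_path, 'labels.npy'))      # node labels (for nc)
    return adj, features, labels

# Inside load_data_nc / load_data_lp one would then dispatch on the dataset name,
# e.g. with an extra branch of the form:
#     elif dataset == 'my_dataset':
#         adj, features, labels = load_my_dataset(os.path.join(data_path, dataset))
# so that --dataset my_dataset becomes a valid option. Check the real function
# signatures in utils/data_utils.py before adapting this sketch.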

3. Usage

3.1 set_env.sh

Before training, run

source set_env.sh

This will create environment variables that are used in the code.

3.2 train.py

This script trains models for link prediction and node classification tasks. Metrics are printed at the end of training or can be saved in a directory by adding the command line argument --save=1.

optional arguments:
  -h, --help            show this help message and exit
  --lr LR               learning rate
  --dropout DROPOUT     dropout probability
  --cuda CUDA           which cuda device to use (-1 for cpu training)
  --epochs EPOCHS       maximum number of epochs to train for
  --weight-decay WEIGHT_DECAY
                        l2 regularization strength
  --optimizer OPTIMIZER
                        which optimizer to use, can be any of [Adam,
                        RiemannianAdam]
  --momentum MOMENTUM   momentum in optimizer
  --patience PATIENCE   patience for early stopping
  --seed SEED           seed for training
  --log-freq LOG_FREQ   how often to print train/val metrics (in epochs)
  --eval-freq EVAL_FREQ
                        how often to compute val metrics (in epochs)
  --save SAVE           1 to save model and logs and 0 otherwise
  --save-dir SAVE_DIR   path to save training logs and model weights (defaults
                        to logs/task/date/run/)
  --sweep-c SWEEP_C
  --lr-reduce-freq LR_REDUCE_FREQ
                        reduce lr every lr-reduce-freq or None to keep lr
                        constant
  --gamma GAMMA         gamma for lr scheduler
  --print-epoch PRINT_EPOCH
  --grad-clip GRAD_CLIP
                        max norm for gradient clipping, or None for no
                        gradient clipping
  --min-epochs MIN_EPOCHS
                        do not early stop before min-epochs
  --task TASK           which tasks to train on, can be any of [lp, nc]
  --model MODEL         which encoder to use, can be any of [Shallow, MLP,
                        HNN, GCN, GAT, HGCN]
  --dim DIM             embedding dimension
  --manifold MANIFOLD   which manifold to use, can be any of [Euclidean,
                        Hyperboloid, PoincareBall]
  --c C                 hyperbolic radius, set to None for trainable curvature
  --r R                 fermi-dirac decoder parameter for lp
  --t T                 fermi-dirac decoder parameter for lp
  --pretrained-embeddings PRETRAINED_EMBEDDINGS
                        path to pretrained embeddings (.npy file) for Shallow
                        node classification
  --pos-weight POS_WEIGHT
                        whether to upweight positive class in node
                        classification tasks
  --num-layers NUM_LAYERS
                        number of hidden layers in encoder
  --bias BIAS           whether to use bias (1) or not (0)
  --act ACT             which activation function to use (or None for no
                        activation)
  --n-heads N_HEADS     number of attention heads for graph attention
                        networks, must be a divisor of dim
  --alpha ALPHA         alpha for leakyrelu in graph attention networks
  --use-att USE_ATT     whether to use hyperbolic attention in HGCN model
  --double-precision DOUBLE_PRECISION
                        whether to use double precision
  --dataset DATASET     which dataset to use
  --val-prop VAL_PROP   proportion of validation edges for link prediction
  --test-prop TEST_PROP
                        proportion of test edges for link prediction
  --use-feats USE_FEATS
                        whether to use node features or not
  --normalize-feats NORMALIZE_FEATS
                        whether to normalize input node features
  --normalize-adj NORMALIZE_ADJ
                        whether to row-normalize the adjacency matrix
  --split-seed SPLIT_SEED
                        seed for data splits (train/test/val)
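The --r and --t arguments above parameterize the Fermi-Dirac decoder used to score candidate edges in link prediction [1]. As a minimal sketch (illustrative only, not the repository's implementation; the default values below are placeholders), the decoder maps the squared embedding distance between two nodes to an edge probability:

import math

def fermi_dirac_prob(sq_dist, r=2.0, t=1.0):
    # Fermi-Dirac decoder: larger distances give lower edge probability;
    # r shifts the decision boundary and t controls its sharpness.
    return 1.0 / (math.exp((sq_dist - r) / t) + 1.0)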

4. Examples

We provide examples of training commands used to train HGCN and other graph embedding models for link prediction and node classification. In the examples below, we use a fixed random seed of 1234 for reproducibility. Note that results might vary slightly depending on the machine used. To reproduce the results in the paper, run each command for 10 random seeds and average the results.
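Since the paper's numbers are averages over 10 seeds, one way to automate this (a hedged sketch; averaging the reported metrics from the saved logs is left out) is to loop over seeds and pass each one via --seed:

import subprocess

base_cmd = ["python", "train.py", "--task", "lp", "--dataset", "cora",
            "--model", "HGCN", "--save", "1"]
for seed in range(1234, 1244):  # 10 different seeds
    # Each run saves its logs and metrics under --save-dir (defaults to logs/task/date/run/).
    subprocess.run(base_cmd + ["--seed", str(seed)], check=True)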

4.1 Training HGCN

Link prediction

  • Cora (Test ROC-AUC=93.79):

python train.py --task lp --dataset cora --model HGCN --lr 0.01 --dim 16 --num-layers 2 --act relu --bias 1 --dropout 0.5 --weight-decay 0.001 --manifold PoincareBall --log-freq 5 --cuda 0 --c None

  • Pubmed (Test ROC-AUC: 95.17):

python train.py --task lp --dataset pubmed --model HGCN --lr 0.01 --dim 16 --num-layers 2 --act relu --bias 1 --dropout 0.4 --weight-decay 0.0001 --manifold PoincareBall --log-freq 5 --cuda 0

  • Disease (Test ROC-AUC: 87.14):

python train.py --task lp --dataset disease_lp --model HGCN --lr 0.01 --dim 16 --num-layers 2 --act relu --bias 1 --dropout 0 --weight-decay 0 --manifold PoincareBall --normalize-feats 0 --log-freq 5

  • Airport (Test ROC-AUC=97.43):

python train.py --task lp --dataset airport --model HGCN --lr 0.01 --dim 16 --num-layers 2 --act relu --bias 1 --dropout 0.0 --weight-decay 0 --manifold PoincareBall --log-freq 5 --cuda 0 --c None

Node classification

  • Cora and Pubmed:

To train an HGCN node classification model on the Cora and Pubmed datasets, first pre-train embeddings for link prediction as described in the previous section. Then train an MLP classifier using the pre-trained embeddings (the embeddings.npy file saved in the save-dir directory); a short sketch for inspecting these embeddings follows this list. For instance, for the Pubmed dataset:

python train.py --task nc --dataset pubmed --model Shallow --lr 0.01 --dim 16 --num-layers 2 --act relu --bias 1 --dropout 0.2 --weight-decay 0.0005 --manifold Euclidean --log-freq 5 --cuda 0 --use-feats 0 --pretrained-embeddings [PATH_TO_EMBEDDINGS]

  • Disease (Test accuracy: 76.77):

python train.py --task nc --dataset disease_nc --model HGCN --dim 16 --lr 0.01 --num-layers 2 --act relu --bias 1 --dropout 0 --weight-decay 0 --manifold PoincareBall --log-freq 5 --cuda 0
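Before the Cora/Pubmed classification run above, the pre-trained embeddings saved by the link prediction step can be sanity-checked with NumPy (a hedged sketch; the path below is a hypothetical placeholder for whatever --save-dir produced):

import numpy as np

emb = np.load('logs/lp/<date>/<run>/embeddings.npy')  # hypothetical path, adjust to your run
print(emb.shape)  # expected: (num_nodes, dim), e.g. dim=16 for the commands above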

4.2 Train other graph embedding models

Link prediction on the Cora dataset

  • Shallow Euclidean (Test ROC-AUC=86.40):

python train.py --task lp --dataset cora --model Shallow --manifold Euclidean --lr 0.01 --weight-decay 0.0005 --dim 16 --num-layers 0 --use-feats 0 --dropout 0.2 --act None --bias 0 --optimizer Adam --cuda 0

  • Shallow Hyperbolic (Test ROC-AUC=85.97):

python train.py --task lp --dataset cora --model Shallow --manifold PoincareBall --lr 0.01 --weight-decay 0.0005 --dim 16 --num-layers 0 --use-feats 0 --dropout 0.2 --act None --bias 0 --optimizer RiemannianAdam --cuda 0

  • GCN (Test ROC-AUC=89.22):

python train.py --task lp --dataset cora --model GCN --lr 0.01 --dim 16 --num-layers 2 --act relu --bias 1 --dropout 0.2 --weight-decay 0 --manifold Euclidean --log-freq 5 --cuda 0

  • HNN (Test ROC-AUC=90.79):

python train.py --task lp --dataset cora --model HNN --lr 0.01 --dim 16 --num-layers 2 --act None --bias 1 --dropout 0.2 --weight-decay 0.001 --manifold PoincareBall --log-freq 5 --cuda 0 --c 1

Node classification on the Pubmed dataset

  • HNN (Test accuracy=68.20):

python train.py --task nc --dataset pubmed --model HNN --lr 0.01 --dim 16 --num-layers 2 --act None --bias 1 --dropout 0.5 --weight-decay 0 --manifold PoincareBall --log-freq 5 --cuda 0

  • MLP (Test accuracy=73.00):

python train.py --task nc --dataset pubmed --model MLP --lr 0.01 --dim 16 --num-layers 2 --act None --bias 0 --dropout 0.2 --weight-decay 0.001 --manifold Euclidean --log-freq 5 --cuda 0

  • GCN (Test accuracy=78.30):

python train.py --task nc --dataset pubmed --model GCN --lr 0.01 --dim 16 --num-layers 2 --act relu --bias 1 --dropout 0.7 --weight-decay 0.0005 --manifold Euclidean --log-freq 5 --cuda 0

  • GAT (Test accuracy=78.50):

python train.py --task nc --dataset pubmed --model GAT --lr 0.01 --dim 16 --num-layers 2 --act elu --bias 1 --dropout 0.5 --weight-decay 0.0005 --alpha 0.2 --n-heads 4 --manifold Euclidean --log-freq 5 --cuda 0

Citation

If you find this code useful, please cite the following paper:

@inproceedings{chami2019hyperbolic,
  title={Hyperbolic graph convolutional neural networks},
  author={Chami, Ines and Ying, Zhitao and R{\'e}, Christopher and Leskovec, Jure},
  booktitle={Advances in Neural Information Processing Systems},
  pages={4869--4880},
  year={2019}
}

Some of the code was forked from the following repositories

References

[1] Chami, I., Ying, R., Ré, C. and Leskovec, J. Hyperbolic Graph Convolutional Neural Networks. NIPS 2019.

[2] Nickel, M. and Kiela, D. Poincaré embeddings for learning hierarchical representations. NIPS 2017.

[3] Ganea, O., Bécigneul, G. and Hofmann, T. Hyperbolic neural networks. NIPS 2018.

[4] Kipf, T.N. and Welling, M. Semi-supervised classification with graph convolutional networks. ICLR 2017.

[5] Veličković, P., Cucurull, G., Casanova, A., Romero, A., Lio, P. and Bengio, Y. Graph attention networks. ICLR 2018.
