
disentanglement_lib

[Sample visualization]

disentanglement_lib is an open-source library for research on learning disentangled representations. It supports a variety of different models, metrics, and data sets:

  • Models: BetaVAE, FactorVAE, BetaTCVAE, DIP-VAE
  • Metrics: BetaVAE score, FactorVAE score, Mutual Information Gap, SAP score, DCI, MCE, IRS, UDR
  • Data sets: dSprites, Color/Noisy/Scream-dSprites, SmallNORB, Cars3D, and Shapes3D
  • It also includes 10'800 pretrained disentanglement models (see below for details).

disentanglement_lib was created by Olivier Bachem and Francesco Locatello at Google Brain Zurich for the large-scale empirical study:

Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations. Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem. ICML (Best Paper Award), 2019.

The code is tested with Python 3 and is meant to be run on Linux systems (such as a Google Cloud Deep Learning VM). It uses TensorFlow, SciPy, NumPy, scikit-learn, TFHub, and Gin.

How does it work?

disentanglement_lib consists of several different steps:

  • Model training: Trains a TensorFlow model and saves the trained model in a TFHub module.
  • Postprocessing: Takes a trained model, extracts a representation (e.g. by using the mean of the Gaussian encoder) and saves the representation function in a TFHub module.
  • Evaluation: Takes a representation function and computes a disentanglement metric.
  • Visualization: Takes a trained model and visualizes it.

All configuration details and experimental results of the different steps are saved and propagated along the steps (see below for a description). At the end, they can be aggregated in a single JSON file and analyzed with Pandas.
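The individual steps can also be driven from Python. The following is a minimal sketch (not part of the library's documented API; the paths are illustrative) that combines the dlib_train and dlib_reproduce entry points described later in this README:

import subprocess

# Train a model from a gin config (see "Running different configurations").
subprocess.run(["dlib_train", "--gin_config=examples/model.gin",
                "--model_dir=output/my_model"], check=True)

# Run postprocessing, evaluation, and visualization on the trained model
# with the standard reproduction pipeline (see "Reproducing prior experiments").
subprocess.run(["dlib_reproduce", "--model_dir=output/my_model",
                "--output_directory=output/my_model_eval"], check=True)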

Usage

Installing disentanglement_lib

First, clone this repository with

git clone https://github.com/google-research/disentanglement_lib.git

Then, navigate to the repository (with cd disentanglement_lib) and run

pip install .[tf_gpu]

(or pip install .[tf] for TensorFlow without GPU support). This should install the package and all the required dependencies. To verify that everything works, simply run the test suite with

dlib_tests

Downloading the data sets

To download the data required for training the models, navigate to any folder and run

dlib_download_data

which will download all the required data files (except for Shapes3D, which is not publicly released) into the current working directory. For convenience, we recommend setting the environment variable DISENTANGLEMENT_LIB_DATA to this path, for example by adding

export DISENTANGLEMENT_LIB_DATA=<path to the data directory>

to your .bashrc file. If you choose not to set the environment variable DISENTANGLEMENT_LIB_DATA, disentanglement_lib will always look for the data in your current folder.
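If you want to double-check which directory will be used, the documented lookup behavior can be mirrored in a few lines of Python (a sketch of the behavior described above, not the library's internal code):

import os

# disentanglement_lib reads DISENTANGLEMENT_LIB_DATA and falls back to the
# current working directory when the variable is not set.
data_dir = os.environ.get("DISENTANGLEMENT_LIB_DATA", os.getcwd())
print("Data sets will be loaded from:", data_dir)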

Reproducing prior experiments

To fully train and evaluate one of the 12'600 models in the paper Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations, simply run

dlib_reproduce --model_num=<?>

where <?> should be replaced with a model index between 0 and 12'599, which corresponds to the model to be trained. This will take a couple of hours and create a folder output/<?> containing the trained model (including checkpoints and TFHub modules), the experimental results (in JSON format), and visualizations (including GIFs). To only print the configuration of that model instead of training it, add the flag --only_print.

After training several of these models, you can aggregate the results by running the following command (in the same folder):

dlib_aggregate_results

which creates a results.json file with all the aggregated results.
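Both commands are ordinary console scripts, so training a small batch of models can be scripted; a minimal sketch, assuming dlib_reproduce and dlib_aggregate_results are on your PATH:

import subprocess

# Reproduce the first three models of the study, one after the other.
for model_num in range(3):
    subprocess.run(["dlib_reproduce", f"--model_num={model_num}"], check=True)

# Aggregate the results of all runs in the current folder into results.json.
subprocess.run(["dlib_aggregate_results"], check=True)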

Running different configurations

Internally, disentanglement_lib uses gin to configure hyperparameters and other settings. To train one of the provided models but with different hyperparameters, you need to write a gin config such as examples/model.gin. Then, you may use the following command

dlib_train --gin_config=examples/model.gin --model_dir=<model_output_directory>

to train the model, where --model_dir specifies the directory in which the results should be saved.

To evaluate the newly trained model consistent with the evaluation protocol in the paper Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations, simply run

dlib_reproduce --model_dir=<model_output_directory> --output_directory=<output>

Similarly, have a look at dlib_postprocess and dlib_evaluate if you want to customize how representations are extracted and evaluated.

Starting your own research

disentanglement_lib is easily extensible and can be used to implement new models and metrics related to disentangled representations. To get started, simply go through examples/example.py, which shows you how to create your own disentanglement model and metric and how to benchmark them against existing ones.
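To give a flavor of the extension points, here is a hypothetical sketch of a custom metric; the function name, signature, and scoring logic are illustrative assumptions, and examples/example.py shows the actual interface expected by the evaluation protocol:

import numpy as np

def my_dummy_metric(ground_truth_data, representation_function, random_state):
    # Hypothetical metric: embed a batch of observations and score the
    # representation by the mean variance of its dimensions.
    factors, observations = ground_truth_data.sample(1000, random_state)
    representations = representation_function(observations)
    return {"my_dummy_metric.score": float(np.mean(np.var(representations, axis=0)))}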

Pretrained disentanglement_lib modules

Reproducing all 12'600 models in the study Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations requires a substantial computational effort. To foster further research, disentanglement_lib includes 10'800 pretrained disentanglement_lib modules that correspond to the results of running dlib_reproduce with --model_num=<?> between 0 and 10'799 (the remaining models correspond to Shapes3D, which is not publicly available). Each disentanglement_lib module contains the trained model (in the form of a TFHub module), the extracted representations (also as TFHub modules), and the recorded experimental results such as the different disentanglement scores (in JSON format). This makes it easy to compare new models to the pretrained ones and to compute new disentanglement metrics on the set of pretrained models.

To access the 10'800 pretrained disentanglement_lib modules, you may download individual ones using the following link:

https://storage.googleapis.com/disentanglement_lib/unsupervised_study_v1/<?>.zip

where <?> corresponds to a model index between 0 and 10'799.
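For scripted access, a pretrained module can be fetched and unpacked directly from the bucket; a minimal sketch using only the Python standard library and the URL pattern above (the local paths are illustrative):

import urllib.request
import zipfile

model_num = 0  # any model index between 0 and 10'799
url = ("https://storage.googleapis.com/disentanglement_lib/"
       f"unsupervised_study_v1/{model_num}.zip")

# Download the ZIP file and extract the module into pretrained/<model_num>.
urllib.request.urlretrieve(url, f"{model_num}.zip")
with zipfile.ZipFile(f"{model_num}.zip") as archive:
    archive.extractall(f"pretrained/{model_num}")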

Each ZIP file in the bucket corresponds to one run of dlib_reproduce with that model number. To learn more about the configuration settings used, look at the code in disentanglement_lib/config/unsupervised_study_v1/sweep.py or run:

dlib_reproduce --model_num=<?> --only_print

Frequently asked questions

How do I make pretty GIFs of my models?

If you run dlib_reproduce, they are automatically saved to the visualizations subfolder in your output directory. Otherwise, you can use the script dlib_visualize_dataset to generate them or call the function visualize(...) in disentanglement_lib/visualize/visualize_model.py.
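If you prefer to call the visualization code from Python directly, a sketch follows; the exact signature of visualize(...) and the paths are assumptions here, so check disentanglement_lib/visualize/visualize_model.py for the actual arguments:

from disentanglement_lib.visualize import visualize_model

# Assumed arguments: a trained model directory and an output directory for
# the generated visualizations (including GIFs).
visualize_model.visualize("output/0/model", "output/0/visualizations", overwrite=True)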

How are results and models saved?

After each of the main steps (training/postprocessing/evaluation), an output directory is created. For all steps, there is a results folder which contains all the configuration settings and experimental results up to that step. The gin subfolder contains the operative gin config for each step in the gin format. The json subfolder contains files with the operative gin config and the experimental results of that step but in JSON format. Finally, the aggregate subfolder contains aggregated JSON files where each file contains both the configs and results from all preceding steps.

The training step further saves the TensorFlow checkpoint (in a tf_checkpoint subfolder) and the trained model as a TFHub module (in a tfhub subfolder). Similarly, the postprocessing step saves the representation function as a TFHub module (in a tfhub subfolder). If you run dlib_reproduce, it will create subfolders for all the different substeps that you ran. In particular, it will create an output directory for each metric that you computed.
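Given this layout, all aggregated result files beneath an output directory can be located programmatically; a small sketch (the output root is illustrative):

import glob
import os

# Find every aggregated evaluation result beneath the pipeline output folder.
pattern = os.path.join("output", "**", "results", "aggregate", "evaluation.json")
for path in glob.glob(pattern, recursive=True):
    print(path)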

How do I access the results?

To access the results, first aggregate all the results using dlib_aggregate_results by specifying a glob pattern that captures all the results files. For example, after training a couple of different models with dlib_reproduce, you would specify

dlib_aggregate_results --output_path=<...>.json \
  --result_file_pattern=<...>/*/metrics/*/*/results/aggregate/evaluation.json

The first * in the glob pattern would capture the different models, the second * different representations and the last * the different metrics. Finally, you may access the aggregated results with:

from disentanglement_lib.utils import aggregate_results

# output_path is the aggregated JSON file written by dlib_aggregate_results.
df = aggregate_results.load_aggregated_json_results(output_path)
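The returned df is a regular Pandas DataFrame holding one row per aggregated results file. Column names depend on your gin configs and metrics, so the grouping below is purely illustrative:

# Inspect which configuration and result columns were aggregated.
print(df.columns)

# Illustrative only; substitute the hyperparameter and metric columns that
# appear in your own results:
# print(df.groupby("train_config.model.name")["evaluation_results.score"].mean())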

Where to look in the code?

The following provides a guide to the overall code structure:

(1) Training step:

  • disentanglement_lib/methods/unsupervised: Contains the training protocol (train.py) and all the model functions for training the methods (vae.py). The methods all inherit from the GaussianEncoderModel class.
  • disentanglement_lib/methods/shared: Contains shared architectures, losses, and optimizers used in the different models.

(2) Postprocessing step:

  • disentanglement_lib/postprocess: Contains the postprocessing pipeline (postprocess.py) and the two extraction methods (methods.py).

(3) Evaluation step:

  • disentanglement_lib/evaluation: Contains the evaluation protocol (evaluate.py).

  • disentanglement_lib/evaluation/metrics: Contains implementations of the different disentanglement metrics.

Hyperparameters and configuration files:

  • disentanglement_lib/config/unsupervised_study_v1: Contains the gin configuration files (*.gin) for the different steps as well as the hyperparameter sweep (sweep.py) for the experiments in the paper Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations.

Shared functionality:

  • bin: Scripts to run the different pipelines, visualize the data sets and models, and aggregate the results.

  • disentanglement_lib/data/ground_truth: Contains all the scripts used to generate the data. All the datasets (in named_data.py) are instances of the class GroundTruthData.

  • disentanglement_lib/utils: Contains helper functions to aggregate and save the results of the pipeline as well as the trained models.

  • disentanglement_lib/visualize: Contains visualization functions for the datasets and the trained models.

NeurIPS 2019 Disentanglement Challenge

The library is also used for the NeurIPS 2019 Disentanglement challenge. The challenge consists of three different datasets.

  1. Simplistic rendered images (mpi3d_toy)
  2. Realistic rendered images (mpi3d_realistic): not yet published
  3. Real world images (mpi3d_real): not yet published

Currently, only the simplistic rendered dataset is publicly available and will be automatically downloaded by running the following command.

dlib_download_data

Other datasets will be made available at later stages of the competition. For more information on the competition, visit the competition website. More information about the dataset can be found in the arXiv preprint On the Transfer of Inductive Bias from Simulation to the Real World: a New Disentanglement Dataset.

Abstract reasoning experiments

The library also includes the code used for the experiments of the following paper in the disentanglement_lib/evaluation/abstract_reasoning subdirectory:

Are Disentangled Representations Helpful for Abstract Visual Reasoning? Sjoerd van Steenkiste, Francesco Locatello, Jürgen Schmidhuber, Olivier Bachem. NeurIPS, 2019.

The experimental protocol consists of two parts: First, to train the disentanglement models, one may use the standard replication pipeline (dlib_reproduce), for example via the following command:

dlib_reproduce --model_num=<?> --study=abstract_reasoning_study_v1

where <?> should be replaced with a model index between 0 and 359, which corresponds to the model to be trained.

Second, to train the abstract reasoning models, one can use the automatically installed pipeline dlib_reason. To configure the model, copy and modify disentanglement_lib/config/abstract_reasoning_study_v1/stage2/example.gin as needed. Then, use the following command to train and evaluate an abstract reasoning model:

dlib_reason --gin_config=<?> --input_dir=<?> --output_dir=<?>

The results can then be found in the results subdirectory of the output directory.

Fairness experiments

The library also includes the code used for the experiments of the following paper in disentanglement_lib/evaluation/metrics/fairness.py:

On the Fairness of Disentangled Representations. Francesco Locatello, Gabriele Abbati, Tom Rainforth, Stefan Bauer, Bernhard Schölkopf, Olivier Bachem. NeurIPS, 2019.

To train and evaluate all the models, simply use the following command:

dlib_reproduce --model_num=<?> --study=fairness_study_v1

where <?> should be replaced with a model index between 0 and 12'599, which corresponds to the model to be trained.

If you only want to reevaluate an already trained model using the evaluation protocol of the paper, you may use the following command:

dlib_reproduce --model_dir=<model_output_directory> --output_directory=<output> --study=fairness_study_v1

UDR experiments

The library also includes the code for the Unsupervised Disentanglement Ranking (UDR) method proposed in the following paper in disentanglement_lib/bin/dlib_udr:

Unsupervised Model Selection for Variational Disentangled Representation Learning. Sunny Duan, Loic Matthey, Andre Saraiva, Nicholas Watters, Christopher P. Burgess, Alexander Lerchner, Irina Higgins.

UDR can be applied to newly trained models (e.g. obtained by running dlib_reproduce) or to the existing pretrained models. After the models have been trained, their UDR scores can be computed by running:

dlib_udr --model_dirs=<model_output_directory1>,<model_output_directory2> \
  --output_directory=<output>

The scores will be exported to <output>/results/aggregate/evaluation.json under the model_scores attribute. The scores will be presented in the order of the input model directories.
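Since both the output path and the attribute are fixed, the scores can be read back with a few lines of Python; a minimal sketch, with <output> standing for your output directory:

import json

# The UDR scores are stored under "model_scores", in the same order as the
# directories passed to --model_dirs.
with open("<output>/results/aggregate/evaluation.json") as f:
    results = json.load(f)
print(results["model_scores"])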

Weakly-Supervised experiments

The library also includes the code for the weakly-supervised disentanglement methods proposed in the following paper in disentanglement_lib/bin/dlib_reproduce_weakly_supervised:

Weakly-Supervised Disentanglement Without Compromises. Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, Michael Tschannen.

To train and evaluate these models, run:

dlib_reproduce_weakly_supervised --output_directory=<output> \
   --gin_model_config_dir=<dir> \
   --gin_model_config_name=<name> \
   --gin_postprocess_config_glob=<postprocess_configs> \
   --gin_evaluation_config_glob=<eval_configs> \
   --pipeline_seed=<seed>

Semi-Supervised experiments

The library also includes the code for the semi-supervised disentanglement methods proposed in the following paper in disentanglement_lib/bin/dlib_reproduce_semi_supervised:

Disentangling Factors of Variation Using Few Labels. Francesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem.

To train and evaluate these models, run:

dlib_reproduce_semi_supervised --output_directory=<output> \
   --gin_model_config_dir=<dir> \
   --gin_model_config_name=<name> \
   --gin_postprocess_config_glob=<postprocess_configs> \
   --gin_evaluation_config_glob=<eval_configs> \
   --gin_validation_config_glob=<val_configs> \
   --pipeline_seed=<seed> \
   --eval_seed=<seed> \
   --supervised_seed=<seed> \
   --num_labelled_samples=<num> \
   --train_percentage=0.9 \
   --labeller_fn="@perfect_labeller"

Feedback

Please send any feedback to [email protected] and [email protected].

Citation

If you use disentanglement_lib, please consider citing:

@inproceedings{locatello2019challenging,
  title={Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations},
  author={Locatello, Francesco and Bauer, Stefan and Lucic, Mario and Raetsch, Gunnar and Gelly, Sylvain and Sch{\"o}lkopf, Bernhard and Bachem, Olivier},
  booktitle={International Conference on Machine Learning},
  pages={4114--4124},
  year={2019}
}

This is not an officially supported Google product.
