

The Hessian Penalty – Official Implementation

Paper | Project Page | ECCV 2020 Spotlight Video | The Hessian Penalty in 90 Seconds

Home | PyTorch BigGAN Discovery | TensorFlow ProGAN Regularization

[Teaser image]

This repo contains code for our new regularization term that encourages disentanglement in neural networks. It efficiently optimizes the Hessian of your neural network to be diagonal in an input, leading to disentanglement in that input. We showcase its usage in generative adversarial networks (GANs), but you can use our code for other neural networks too.
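
Concretely (our paraphrase of the paper's formulation, included here for intuition): writing H for the Hessian of G's output with respect to its input z, the Hessian Penalty targets the off-diagonal entries of H,

$$\mathcal{L}_H(G, z) = \sum_{i \neq j} H_{ij}^2, \qquad H_{ij} = \frac{\partial^2 G}{\partial z_i \, \partial z_j},$$

which is zero exactly when H is diagonal. For vector-valued G, the per-output penalties are aggregated (e.g., with a max), and the quantity is estimated with finite differences rather than by forming H explicitly; a minimal sketch of that estimator appears after the Tips and Tricks paragraph below.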

The Hessian Penalty: A Weak Prior for Unsupervised Disentanglement
William Peebles, John Peebles, Jun-Yan Zhu, Alexei Efros, Antonio Torralba
UC Berkeley, Yale, Adobe Research, MIT CSAIL
ECCV 2020 (Spotlight)

This repo contains the following:

- Portable implementations of the Hessian Penalty that you can drop into your own projects (see "Adding the Hessian Penalty to Your Code" below)
- TensorFlow ProgressiveGAN regularization experiments
- PyTorch BigGAN direction discovery experiments

Below are some examples of latent space directions and components learned via the Hessian Penalty:

[Teaser images: Dogs, Dogs background, Dogs lighting, Church, CLEVR]

Adding the Hessian Penalty to Your Code

We provide portable implementations of the Hessian Penalty that you can easily add to your projects.

Adding the Hessian Penalty to your own code is very simple:

from hessian_penalty_pytorch import hessian_penalty

net = MyNeuralNet()
input = sample_input()
loss = hessian_penalty(G=net, z=input)  # Hessian Penalty of net's output w.r.t. its input
loss.backward()
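
In a full GAN training loop, this term is typically weighted and added to your usual generator objective. Below is a hedged sketch of that pattern, in the same spirit as the snippet above; MyGenerator, compute_generator_loss, hp_lambda, and the loop bounds are placeholders, not names from this repo:

import torch
from hessian_penalty_pytorch import hessian_penalty

G = MyGenerator()                                # placeholder generator
opt = torch.optim.Adam(G.parameters(), lr=2e-4)
hp_lambda = 0.1                                  # placeholder penalty weight

for _ in range(num_training_steps):              # placeholder loop bound
    z = torch.randn(batch_size, z_dim)           # placeholder latent batch
    gan_loss = compute_generator_loss(G, z)      # placeholder for your existing generator loss
    hp_loss = hessian_penalty(G=G, z=z)          # same call as in the snippet above
    loss = gan_loss + hp_lambda * hp_loss
    opt.zero_grad()
    loss.backward()
    opt.step()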

See our Tips and Tricks for some advice about training with the Hessian Penalty and avoiding pitfalls. Our code supports regularizing multiple activations simultaneously; see the fourth bullet point in Tips and Tricks for how to enable this feature.
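
For intuition about what hessian_penalty computes, here is a minimal, self-contained sketch of the finite-difference, Rademacher-direction estimator described in the paper. It is not the official implementation (use hessian_penalty_pytorch.py for that); the function name and default values below are ours:

import torch

def hessian_penalty_sketch(G, z, k=2, epsilon=0.1):
    # Draw k random Rademacher directions (entries +1/-1), scaled by epsilon.
    vs = epsilon * (2 * torch.randint(0, 2, (k, *z.shape)) - 1).to(z)
    # A central second-order finite difference approximates the second
    # directional derivative of G at z along each direction v.
    G_z = G(z)
    second_orders = torch.stack([
        (G(z + v) - 2 * G_z + G(z - v)) / (epsilon ** 2) for v in vs
    ])
    # The variance of these directional second derivatives across Rademacher
    # directions estimates the sum of squared off-diagonal Hessian entries;
    # reduce over output elements with a max.
    return second_orders.var(dim=0, unbiased=True).max()

The official hessian_penalty_pytorch.py follows the same idea but is more featureful (for example, it can regularize multiple activations at once, as noted above).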

Getting Started

This section and the ones below are only needed if you want to visualize, evaluate, or train with our code and models. To use the Hessian Penalty in your own code, you can simply copy one of the files mentioned in the section above.

Both the TensorFlow and PyTorch codebases have been tested on Linux with NVIDIA GPUs. You need at least Python 3.6. To get started, download this repo:

git clone git@github.com:wpeebles/hessian_penalty.git
cd hessian_penalty

Then, set up your environment. You can use the environment.yml file to set up a Conda environment:

conda env create -f environment.yml
conda activate hessian

If you opt to use your own environment, we recommend using TensorFlow 1.14.0 and PyTorch >= 1.6.0. Now you're all set up.
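
As a quick sanity check of your own environment (a trivial snippet; it only assumes both frameworks are importable):

import tensorflow as tf
import torch

print("TensorFlow:", tf.__version__)  # recommended: 1.14.0
print("PyTorch:", torch.__version__)  # recommended: >= 1.6.0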

TensorFlow ProgressiveGAN Regularization Experiments

PyTorch BigGAN Direction Discovery Experiments

Citation

If our code aided your research, please cite our paper:

@inproceedings{peebles2020hessian,
  title={The Hessian Penalty: A Weak Prior for Unsupervised Disentanglement},
  author={Peebles, William and Peebles, John and Zhu, Jun-Yan and Efros, Alexei A. and Torralba, Antonio},
  booktitle={Proceedings of European Conference on Computer Vision (ECCV)},
  year={2020}
}

Acknowledgments

We thank Pieter Abbeel, Taesung Park, Richard Zhang, Mathieu Aubry, Ilija Radosavovic, Tim Brooks, Karttikeya Mangalam, and all of BAIR for valuable discussions and encouragement. This work was supported, in part, by grants from SAP, Adobe, and Berkeley DeepDrive.