• Stars: 986
• Rank: 44,793 (top 1.0%)
• Language: Python
• License: MIT License
• Created: about 5 years ago
• Updated: over 3 years ago

Repository Details

🦋 A PyTorch implementation of BigGAN with pretrained weights and conversion scripts.

PyTorch pretrained BigGAN

An op-for-op PyTorch reimplementation of DeepMind's BigGAN model with the pre-trained weights from DeepMind.

Introduction

This repository contains an op-for-op PyTorch reimplementation of DeepMind's BigGAN that was released with the paper Large Scale GAN Training for High Fidelity Natural Image Synthesis by Andrew Brock, Jeff Donahue and Karen Simonyan.

This PyTorch implementation of BigGAN comes with DeepMind's pretrained 128x128, 256x256 and 512x512 models. We also provide the scripts used to download and convert these models from TensorFlow Hub.

This reimplementation was done from the raw computation graph of the TensorFlow version and behaves similarly to the TensorFlow version (the variance of the output difference is on the order of 1e-5).

This implementation currently contains only the generator, as the weights of the discriminator were not released (although the structure of the discriminator is very similar to that of the generator, so it could be added fairly easily; tell me if you want to open a PR for that, I would be happy to help).

Installation

This repo was tested on Python 3.6 and PyTorch 1.0.1.

PyTorch pretrained BigGAN can be installed from pip as follows:

pip install pytorch-pretrained-biggan

If you simply want to play with the GAN this should be enough.

If you want to use the conversion scripts and the ImageNet utilities, additional requirements are needed, in particular TensorFlow and NLTK. To install all the requirements, please use the full_requirements.txt file:

git clone https://github.com/huggingface/pytorch-pretrained-BigGAN.git
cd pytorch-pretrained-BigGAN
pip install -r full_requirements.txt

Models

This repository provides direct and simple access to the pretrained "deep" versions of BigGAN at 128, 256 and 512 pixel resolutions, as described in the associated publication. Here are some details on the models:

  • BigGAN-deep-128: a 50.4M parameter model generating 128x128 pixel images; the model dump weighs 201 MB,
  • BigGAN-deep-256: a 55.9M parameter model generating 256x256 pixel images; the model dump weighs 224 MB,
  • BigGAN-deep-512: a 56.2M parameter model generating 512x512 pixel images; the model dump weighs 225 MB.

Please refer to Appendix B of the paper for details on the architectures.

All models comprise pre-computed batch norm statistics for 51 truncation values between 0 and 1 (see Appendix C.1 in the paper for details).
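
As an illustration, here is a minimal sketch (using the model and utilities described in the sections below) that generates the same class at several truncation values; lower truncation generally trades sample diversity for fidelity. The class index, seed and file names are illustrative choices, not part of the library:

import torch
from pytorch_pretrained_biggan import BigGAN, one_hot_from_int, truncated_noise_sample, save_as_images

model = BigGAN.from_pretrained('biggan-deep-128')
class_vector = torch.from_numpy(one_hot_from_int(207, batch_size=1))  # 207: ImageNet 'golden retriever'

for truncation in (0.2, 0.5, 1.0):
    # Re-use the same seed so that only the truncation changes between samples
    noise = torch.from_numpy(truncated_noise_sample(truncation=truncation, batch_size=1, seed=0))
    with torch.no_grad():
        output = model(noise, class_vector, truncation)
    save_as_images(output, file_name='trunc_{}'.format(truncation))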

Usage

Here is a quick-start example using BigGAN with a pre-trained model.

See the doc section below for details on these classes and methods.

import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names, truncated_noise_sample,
                                       save_as_images, display_in_terminal)

# OPTIONAL: if you want to have more information on what's happening, activate the logger as follows
import logging
logging.basicConfig(level=logging.INFO)

# Load the pre-trained model
model = BigGAN.from_pretrained('biggan-deep-256')

# Prepare an input
truncation = 0.4
class_vector = one_hot_from_names(['soap bubble', 'coffee', 'mushroom'], batch_size=3)
noise_vector = truncated_noise_sample(truncation=truncation, batch_size=3)

# All in tensors
noise_vector = torch.from_numpy(noise_vector)
class_vector = torch.from_numpy(class_vector)

# If you have a GPU, put everything on cuda
noise_vector = noise_vector.to('cuda')
class_vector = class_vector.to('cuda')
model.to('cuda')

# Generate an image
with torch.no_grad():
    output = model(noise_vector, class_vector, truncation)

# If you have a GPU, put back on CPU
output = output.to('cpu')

# If you have a sixel-compatible terminal you can display the images in the terminal
# (see https://github.com/saitoha/libsixel for details)
display_in_terminal(output)

# Save results as png images
save_as_images(output)

(Sample generated images: output_0, output_1, output_2.)

Doc

Loading DeepMind's pre-trained weights

To load one of DeepMind's pre-trained models, instantiate a BigGAN model with from_pretrained() as:

model = BigGAN.from_pretrained(PRE_TRAINED_MODEL_NAME_OR_PATH, cache_dir=None)

where

  • PRE_TRAINED_MODEL_NAME_OR_PATH is either:

    • the shortcut name of one of DeepMind's pre-trained models, selected in the list:

      • biggan-deep-128: 50.4M parameter model generating 128x128 pixel images
      • biggan-deep-256: 55.9M parameter model generating 256x256 pixel images
      • biggan-deep-512: 56.2M parameter model generating 512x512 pixel images
    • a path or url to a pretrained model archive containing:

      • config.json: a configuration file for the model, and
      • pytorch_model.bin: a PyTorch dump of a pre-trained instance of BigGAN (saved with the usual torch.save()).

    If PRE_TRAINED_MODEL_NAME_OR_PATH is a shortcut name, the pre-trained weights will be downloaded from AWS S3 (see the links here) and stored in a cache folder to avoid future downloads (the cache folder can be found at ~/.pytorch_pretrained_biggan/).

  • cache_dir can be an optional path to a specific directory to download and cache the pre-trained model weights.
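
For example (the cache path and local directory below are placeholders):

from pytorch_pretrained_biggan import BigGAN

# Download 'biggan-deep-128' into a custom cache directory (placeholder path)
model = BigGAN.from_pretrained('biggan-deep-128', cache_dir='/tmp/biggan_cache')

# Load from a local folder containing config.json and pytorch_model.bin (placeholder path)
model = BigGAN.from_pretrained('./my_biggan_dump')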

Configuration

BigGANConfig is a class to store and load BigGAN configurations. It's defined in config.py.

Here are some details on the attributes:

  • output_dim: output resolution of the GAN (128, 256 or 512 for the pre-trained models),
  • z_dim: size of the noise vector (128 for the pre-trained models),
  • class_embed_dim: size of the class embedding vectors (128 for the pre-trained models),
  • channel_width: size of each channel (128 for the pre-trained models),
  • num_classes: number of classes in the training dataset, e.g. ImageNet (1000 for the pre-trained models),
  • layers: a list of layer definitions; each layer is defined by a triple [whether to up-sample in the layer (bool), number of input channels (int), number of output channels (int)],
  • attention_layer_position: position of the self-attention layer in the layer hierarchy (8 for the pre-trained models),
  • eps: epsilon value for the spectral and batch normalization layers (1e-4 for the pre-trained models),
  • n_stats: number of pre-computed statistics for the batch normalization layers, one set for each of the truncation values between 0 and 1 (51 for the pre-trained models).
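
Here is a minimal sketch of inspecting these attributes; it assumes the loaded model exposes its BigGANConfig as model.config:

from pytorch_pretrained_biggan import BigGAN

model = BigGAN.from_pretrained('biggan-deep-256')
config = model.config       # assumed attribute holding the BigGANConfig
print(config.output_dim)    # 256
print(config.z_dim)         # 128
print(config.num_classes)   # 1000
# Each entry of config.layers is a triple: [up_sample (bool), in_channels (int), out_channels (int)]
print(config.layers[0])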

Model

BigGAN is a PyTorch model (torch.nn.Module) of BigGAN defined in model.py. This model comprises the class embeddings (a linear layer) and the generator with a series of convolutions and conditional batch norms. The discriminator is currently not implemented since pre-trained weights have not been released for it.

The inputs and outputs are identical to those of the TensorFlow model. We detail them here.

BigGAN takes as inputs:

  • z: a torch.FloatTensor of shape [batch_size, config.z_dim] with noise sampled from a truncated normal distribution,
  • class_label: a torch.FloatTensor of shape [batch_size, config.num_classes] containing one-hot class vectors (see the one_hot_from_int and one_hot_from_names utilities below), and
  • truncation: a float between 0 (excluded) and 1: the truncation of the truncated normal distribution used to create the noise vector. This value is also used to select between the sets of pre-computed statistics (means and variances) for the batch norm layers.

BigGAN outputs an array of shape [batch_size, 3, resolution, resolution] where resolution is 128, 256 or 512 depending on the model.
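
As a concrete check, here is a minimal sketch of the expected shapes, re-using the variables from the quick-start example above (biggan-deep-256, batch of 3):

# torch.Size compares equal to a plain tuple
assert noise_vector.shape == (3, 128)     # [batch_size, config.z_dim]
assert class_vector.shape == (3, 1000)    # [batch_size, config.num_classes]
assert output.shape == (3, 3, 256, 256)   # [batch_size, 3, resolution, resolution]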

Utilities: Images, Noise, Imagenet classes

We provide a few utility methods for using the model. They are defined in utils.py.

Here are some details on these methods (a combined end-to-end sketch follows the list):

  • truncated_noise_sample(batch_size=1, dim_z=128, truncation=1., seed=None):

    Create a truncated noise vector.

    • Params:
      • batch_size: batch size.
      • dim_z: dimension of z
      • truncation: truncation value to use
      • seed: seed for the random generator
    • Output: array of shape (batch_size, dim_z)
  • convert_to_images(obj):

    Convert an output tensor from BigGAN into a list of images.

    • Params:
      • obj: tensor or numpy array of shape (batch_size, channels, height, width)
    • Output:
      • list of Pillow Images of size (height, width)
  • save_as_images(obj, file_name='output'):

    Convert an output tensor from BigGAN and save it as a series of images.

    • Params:
      • obj: tensor or numpy array of shape (batch_size, channels, height, width)
      • file_name: path and beginning of the filenames to save to. Images will be saved as file_name_{image_number}.png
  • display_in_terminal(obj):

    Convert and display an output tensor from BigGAN in the terminal. This function uses libsixel and will only work in a libsixel-compatible terminal. Please refer to https://github.com/saitoha/libsixel for more details.

    • Params:
      • obj: tensor or numpy array of shape (batch_size, channels, height, width)
  • one_hot_from_int(int_or_list, batch_size=1):

    Create a one-hot vector from a class index or a list of class indices.

    • Params:
      • int_or_list: int, or list of ints, of the ImageNet classes (between 0 and 999)
      • batch_size: batch size.
        • If int_or_list is an int, create a batch of identical classes.
        • If int_or_list is a list, we should have len(int_or_list) == batch_size.
    • Output:
      • array of shape (batch_size, 1000)
  • one_hot_from_names(class_name, batch_size=1):

    Create a one-hot vector from the name of an ImageNet class ('tennis ball', 'daisy', ...). We use NLTK's WordNet search to try to find the relevant ImageNet synset and take the first one. If we can't find it directly, we look at the hyponyms and hypernyms of the class name.

    • Params:
      • class_name: string, or list of strings (as in the usage example above), containing the name(s) of ImageNet objects.
      • batch_size: batch size.
    • Output:
      • array of shape (batch_size, 1000)
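
Putting these utilities together, here is a short end-to-end sketch (the class indices, seed and file name are arbitrary illustrative choices):

import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_int, truncated_noise_sample,
                                       convert_to_images, save_as_images)

model = BigGAN.from_pretrained('biggan-deep-128')
truncation = 0.4
class_vector = torch.from_numpy(one_hot_from_int([14, 514], batch_size=2))  # two arbitrary ImageNet classes
noise_vector = torch.from_numpy(truncated_noise_sample(truncation=truncation, batch_size=2, seed=42))

with torch.no_grad():
    output = model(noise_vector, class_vector, truncation)

images = convert_to_images(output)        # list of 2 Pillow images
save_as_images(output, file_name='demo')  # writes demo_0.png and demo_1.png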

Download and conversion scripts

Scripts to download and convert the TensorFlow models from TensorFlow Hub are provided in ./scripts.

The scripts can be used directly as:

./scripts/download_tf_hub_models.sh
./scripts/convert_tf_hub_models.sh
