• Stars: 266
  • Rank: 148,538 (top 4%)
  • Language: Python
  • License: MIT License
  • Created: over 4 years ago
  • Updated: almost 4 years ago


Repository Details

A simple, easy-to-use Tensorflow 2.x cookbook.

Tensorflow 1 Cookbook

Contributions

For now, this repo provides general architectures and functions that are useful for GANs and classification.

I will continue to add useful things for other areas as well.

Also, your pull requests and issues are always welcome.

If you open an issue describing what you want implemented, I'll implement it.

Functional vs Sequential

Functional API [Template code]

Pros

  • Faster than the Sequential API
  • Easier to create a flexible model architecture
    • Easy to use layer operations such as concatenate, add, ...

Cons

  • You must define tf.keras.layers.Input
    • You have to know the shape of the input tensor
  • You must define tf.keras.Model
    • You have to call the Model constructor at the end

Sequential API [Template code]

Pros

  • Simple to use
    • Similar to the PyTorch style

Cons

  • Hard to create a flexible model architecture

Example code
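
As a quick illustration of the trade-offs above, here is the same tiny classifier written both ways in plain tf.keras (a minimal sketch, not this repo's ops):

import tensorflow as tf

# Functional API: explicit Input and an explicit Model constructor
inputs = tf.keras.layers.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(64, 3, strides=2, padding='same')(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10)(x)
functional_model = tf.keras.Model(inputs, outputs)

# Sequential API: no Input or Model needed, but only a single straight stack
sequential_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 3, strides=2, padding='same'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])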


How to use

1. Import

Functional API

  • ops_functional.py
    • Functional API operations
    • from ops_functional import *

Sequential API

  • ops_sequential.py
    • Sequential API operations
    • from ops_sequential import *

Common

  • utils.py
    • image processing plus other useful functions (e.g. automatic_gpu_usage)
      • automatic_gpu_usage : automatically manages GPU memory (see the sketch below)
      • multiple_gpu_usage : lets you set a GPU memory limit
    • from utils import *
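
For reference, these helpers are presumably thin wrappers around TensorFlow's GPU memory configuration; a minimal sketch of that pattern (an assumption about utils.py, not its verbatim contents; the memory_limit argument here is illustrative):

import tensorflow as tf

def automatic_gpu_usage():
    # allocate GPU memory on demand instead of grabbing it all at startup
    for gpu in tf.config.experimental.list_physical_devices('GPU'):
        tf.config.experimental.set_memory_growth(gpu, True)

def multiple_gpu_usage(memory_limit=1024):
    # cap each GPU at a fixed memory budget (in MB)
    for gpu in tf.config.experimental.list_physical_devices('GPU'):
        tf.config.experimental.set_virtual_device_configuration(
            gpu, [tf.config.experimental.VirtualDeviceConfiguration(
                memory_limit=memory_limit)])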

2. Network template

Functional API

from ops_functional import *
from utils import *

automatic_gpu_usage() # for efficient gpu use

input_shape = [img_height, img_width, img_ch]
inputs = tf.keras.layers.Input(input_shape, name='input')

# architecture
x = conv(inputs, channels=64, kernel=3, stride=2, pad=1, pad_type='reflect', use_bias=False, sn=False, name='conv')
x = batch_norm(x, name='batch_norm')
x = relu(x)

x = global_avg_pooling(x)
x = fully_connected(x, units=10, sn=False, name='fc')

model = tf.keras.Model(inputs, x, name='model')
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

Sequential API

from ops_sequential import *
from utils import *

automatic_gpu_usage() # for efficient gpu use

model = []

model += [Conv(channels=64, kernel=3, stride=2, pad=1, pad_type='reflect', use_bias=False, sn=False, name='conv')]
model += [BatchNorm(name='batch_norm')]
model += [Relu()]

model += [Global_Avg_Pooling()]
model += [FullyConnected(units=10, sn=False, name='fc')]

model = Sequential(model, name='model')
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)

3. Data pipeline

img_class = Image_data(img_height, img_width, img_ch, dataset_path, augment_flag)
img_class.preprocess()

# these transforms live in tf.data.experimental
AUTOTUNE = tf.data.experimental.AUTOTUNE
shuffle_and_repeat = tf.data.experimental.shuffle_and_repeat
map_and_batch = tf.data.experimental.map_and_batch
prefetch_to_device = tf.data.experimental.prefetch_to_device

img_slice = tf.data.Dataset.from_tensor_slices(img_class.dataset)
gpu_device = '/gpu:0'
img_slice = img_slice. \
                apply(shuffle_and_repeat(dataset_num)). \
                apply(map_and_batch(img_class.image_processing, batch_size,
                                    num_parallel_batches=AUTOTUNE,
                                    drop_remainder=True)). \
                apply(prefetch_to_device(gpu_device, AUTOTUNE))

dataset_iter = iter(img_slice)
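
shuffle_and_repeat, map_and_batch, and prefetch_to_device are deprecated in recent TF 2.x releases; an equivalent pipeline built from the plain Dataset methods would look like this (a sketch under that assumption):

img_slice = tf.data.Dataset.from_tensor_slices(img_class.dataset) \
    .shuffle(dataset_num) \
    .repeat() \
    .map(img_class.image_processing,
         num_parallel_calls=tf.data.experimental.AUTOTUNE) \
    .batch(batch_size, drop_remainder=True) \
    .prefetch(tf.data.experimental.AUTOTUNE)

dataset_iter = iter(img_slice)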

4. Restore

ckpt = tf.train.Checkpoint(model=model, optimizer=optimizer)
manager = tf.train.CheckpointManager(ckpt, checkpoint_dir, max_to_keep=2)
start_iteration = 0

if manager.latest_checkpoint:
  ckpt.restore(manager.latest_checkpoint)
  start_iteration = int(manager.latest_checkpoint.split('-')[-1])
  print('Latest checkpoint restored!!')
else:
  print('Not restoring from saved checkpoint')

5-1. Train

def train_step(img):
  with tf.GradientTape() as tape:
    logit = model(img)
    
    # calculate loss
    """
    if classification
    your_loss = cross_entropy_loss(logit, label)
    """
    
    loss = your_loss + regularization_loss(model)
  
  train_variable = model.trainable_variables
  gradient = tape.gradient(loss, train_variable)
  optimizer.apply_gradients(zip(gradient, train_variable))
  
  return loss

def train():
  # setup tensorboard
  summary_writer = tf.summary.create_file_writer(log_dir)
  
  for idx in range(start_iteration, total_iteration):
    img = next(dataset_iter)
    
    
    # update network
    loss = train_step(img)
    
    # save to tensorboard
    with summary_writer.as_default():
      tf.summary.scalar('loss', loss, step=idx)
    
    # save ckpt
    manager.save(checkpoint_number=idx + 1)
  
  # save model for final step
  manager.save(checkpoint_number=total_iteration)
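
The loop above runs eagerly, which is easy to debug but slow; wrapping the step in tf.function traces it into a graph once and reuses it afterwards. The change is just a decorator:

@tf.function
def train_step(img):
    ...  # same body as above; TensorFlow compiles it on the first call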

5-2. Multi-GPUs train [Template code]

strategy = tf.distribute.MirroredStrategy()
NUM_GPUS = strategy.num_replicas_in_sync

total_iteration = iteration // NUM_GPUS

with strategy.scope():
  # copy & paste
  # 2. Network template
  # 3. Data pipeline
  # 4. Restore

def train_step(img):
  """ SAME """
  
def distribute_train_step(img):
  with strategy.scope():
    loss = strategy.experimental_run_v2(train_step, args=(img,))
    
    loss = strategy.reduce(tf.distribute.ReduceOp.MEAN, loss, axis=None)
    
    return loss

def train():
  # setup tensorboard
  summary_writer = tf.summary.create_file_writer(log_dir)
  
  for idx in range(start_iteration, total_iteration):
    img = next(dataset_iter)
    
    # update network
    loss = distribute_train_step(img)
    
    """
    SAME
    """

Weight

weight_initializer = tf.initializers.RandomNormal(mean=0.0, stddev=0.02)
weight_regularizer = tf.keras.regularizers.l2(0.0001)
weight_regularizer_fully = tf.keras.regularizers.l2(0.0001)

Initialization

  • Xavier : tf.initializers.GlorotUniform() or tf.initializers.GlorotNormal()
  • He : tf.initializers.VarianceScaling()
  • Normal : tf.initializers.RandomNormal(mean=0.0, stddev=0.02)
  • Truncated normal : tf.initializers.TruncatedNormal(mean=0.0, stddev=0.02)
  • Orthogonal : tf.initializers.Orthogonal(gain=0.02)

Regularization

  • l2_decay : tf.keras.regularizers.l2(0.0001)
  • orthogonal_regularizer : orthogonal_regularizer(0.0001) # orthogonal_regularizer_fully(0.0001)
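
These defaults plug into the ops as module-level globals; with plain Keras layers the equivalent wiring would be (a sketch, not this repo's code):

conv = tf.keras.layers.Conv2D(
    filters=64, kernel_size=3,
    kernel_initializer=tf.initializers.RandomNormal(mean=0.0, stddev=0.02),
    kernel_regularizer=tf.keras.regularizers.l2(0.0001))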

Option

  • padding='SAME'
    • pad = ceil((kernel - stride) / 2), e.g. kernel=3, stride=2 gives pad = 1
  • pad_type
    • 'zero' or 'reflect'
  • sn
    • whether or not to use spectral normalization

Examples of Functional API

Recurrent

x = various_rnn(x, n_hidden=128, n_layer=2, dropout_rate=0.5, training=True, bidirectional=True, rnn_type='lstm', name='rnn')

Supported configurations (see the sketch after this list):

  • LSTM
  • GRU
  • Bidirectional
  • Deep (n_layer > 1)
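
A rough idea of how various_rnn could be assembled from stock Keras layers (an assumption about ops_functional.py, not its verbatim contents; the training flag is handled by Keras at call time):

def various_rnn(x, n_hidden=128, n_layer=2, dropout_rate=0.5,
                bidirectional=True, rnn_type='lstm'):
    rnn_cls = {'lstm': tf.keras.layers.LSTM, 'gru': tf.keras.layers.GRU}[rnn_type]
    for _ in range(n_layer):
        layer = rnn_cls(n_hidden, dropout=dropout_rate, return_sequences=True)
        if bidirectional:
            layer = tf.keras.layers.Bidirectional(layer)  # forward + backward
        x = layer(x)
    return x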

Convolution

basic conv

x = conv(x, channels=64, kernel=3, stride=2, pad=1, pad_type='reflect', use_bias=True, sn=True, name='conv')

partial conv (NVIDIA Partial Convolution)

x = partial_conv(x, channels=64, kernel=3, stride=2, use_bias=True, padding='SAME', sn=True, name='partial_conv')


dilated conv

x = dilate_conv(x, channels=64, kernel=3, rate=2, use_bias=True, padding='VALID', sn=True, name='dilate_conv')

Deconvolution

basic deconv

x = deconv(x, channels=64, kernel=3, stride=1, padding='SAME', use_bias=True, sn=True, name='deconv')

Fully-connected

x = fully_connected(x, units=64, use_bias=True, sn=True, name='fully_connected')

Pixel shuffle

x = conv_pixel_shuffle_down(x, scale_factor=2, use_bias=True, sn=True, name='pixel_shuffle_down')
x = conv_pixel_shuffle_up(x, scale_factor=2, use_bias=True, sn=True, name='pixel_shuffle_up')
  • down ===> [height, width] -> [height // scale_factor, width // scale_factor]
  • up ===> [height, width] -> [height * scale_factor, width * scale_factor]
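
Under the hood, pixel shuffle is tf.nn.space_to_depth / tf.nn.depth_to_space applied after a convolution that adjusts the channel count; a minimal sketch of the two directions (an assumption about these helpers):

def pixel_shuffle_down(x, scale_factor=2):
    # [B, H, W, C] -> [B, H // r, W // r, C * r * r]
    return tf.nn.space_to_depth(x, block_size=scale_factor)

def pixel_shuffle_up(x, scale_factor=2):
    # [B, H, W, C * r * r] -> [B, H * r, W * r, C]
    return tf.nn.depth_to_space(x, block_size=scale_factor)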

Block

residual block

x = resblock(x, channels=64, is_training=is_training, use_bias=True, sn=True, name='residual_block')
x = resblock_down(x, channels=64, is_training=is_training, use_bias=True, sn=True, name='residual_block_down')
x = resblock_up(x, channels=64, is_training=is_training, use_bias=True, sn=True, name='residual_block_up')
  • down ===> [height, width] -> [height // 2, width // 2]
  • up ===> [height, width] -> [height * 2, width * 2]

dense block

x = denseblock(x, channels=64, n_db=6, is_training=is_training, use_bias=True, sn=True, name='denseblock')
  • n_db ===> the number of dense blocks

residual-dense block

x = res_denseblock(x, channels=64, n_rdb=20, n_rdb_conv=6, is_training=is_training, use_bias=True, sn=True, name='res_denseblock')
  • n_rdb ===> the number of RDBs
  • n_rdb_conv ===> the number of conv layers per RDB

attention block

x = self_attention(x, use_bias=True, sn=True, name='self_attention')
x = self_attention_with_pooling(x, use_bias=True, sn=True, name='self_attention_version_2')

x = squeeze_excitation(x, ratio=16, use_bias=True, sn=True, name='squeeze_excitation')

x = convolution_block_attention(x, ratio=16, use_bias=True, sn=True, name='convolution_block_attention')

x = global_context_block(x, use_bias=True, sn=True, name='gc_block')

x = srm_block(x, use_bias=False, is_training=is_training, name='srm_block')





Normalization

x = batch_norm(x, training=training, name='batch_norm')
x = layer_norm(x, name='layer_norm')
x = instance_norm(x, name='instance_norm')
x = group_norm(x, groups=32, name='group_norm')

x = pixel_norm(x)

x = batch_instance_norm(x, name='batch_instance_norm')
x = layer_instance_norm(x, name='layer_instance_norm')
x = switch_norm(x, name='switch_norm')

x = condition_batch_norm(x, z, training=training, name='condition_batch_norm')

x = adaptive_instance_norm(x, gamma, beta)
x = adaptive_layer_instance_norm(x, gamma, beta, smoothing=True, name='adaLIN')
  • See this for how to use condition_batch_norm
  • See this for how to use adaptive_instance_norm
  • See this for how to use adaptive_layer_instance_norm & layer_instance_norm

Activation

x = relu(x)
x = lrelu(x, alpha=0.2)
x = tanh(x)
x = sigmoid(x)
x = swish(x)
x = elu(x)

Pooling & Resize

x = nearest_up_sample(x, scale_factor=2)
x = bilinear_up_sample(x, scale_factor=2)
x = nearest_down_sample(x, scale_factor=2)
x = bilinear_down_sample(x, scale_factor=2)

x = max_pooling(x, pool_size=2)
x = avg_pooling(x, pool_size=2)

x = global_max_pooling(x)
x = global_avg_pooling(x)

x = flatten(x)
x = hw_flatten(x)

Loss

classification loss

loss, accuracy = classification_loss(logit, label)

loss = dice_loss(logit, label, n_classes=10)
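
classification_loss presumably wraps softmax cross-entropy plus an accuracy computation over one-hot labels; a sketch of that assumption:

def classification_loss(logit, label):
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=label, logits=logit))
    prediction = tf.equal(tf.argmax(logit, -1), tf.argmax(label, -1))
    accuracy = tf.reduce_mean(tf.cast(prediction, tf.float32))
    return loss, accuracy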

regularization loss

model_reg_loss = regularization_loss(model)
  • If you use a weight regularizer, you have to add this loss term yourself (see the sketch below).
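
Keras layers record their kernel_regularizer penalties in model.losses as the model is built, so regularization_loss presumably just sums them; a sketch of that assumption:

def regularization_loss(model):
    # model.losses holds one scalar penalty per regularized layer
    return tf.add_n(model.losses) if model.losses else tf.constant(0.0)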

pixel loss

loss = L1_loss(x, y)
loss = L2_loss(x, y)
loss = huber_loss(x, y)
loss = histogram_loss(x, y)

loss = gram_style_loss(x, y)

loss = color_consistency_loss(x, y)
  • histogram_loss measures the difference between the color distributions (pixel-value histograms) of two images.
  • gram_style_loss measures the difference between the styles of two images, using Gram matrices.
  • color_consistency_loss measures the color difference between the generated image and the input image.

gan loss

d_loss = discriminator_loss(Ra=True, gan_type='wgan-gp', real_logit=real_logit, fake_logit=fake_logit)
g_loss = generator_loss(Ra=True, gan_type='wgan-gp', real_logit=real_logit, fake_logit=fake_logit)
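
Ra=True selects the relativistic variants, where each logit is judged against the average logit of the other distribution. For the plain (Ra=False) hinge case, the two losses reduce to roughly this (a sketch, not this repo's exact code):

def discriminator_hinge_loss(real_logit, fake_logit):
    # push real logits above +1 and fake logits below -1
    real_loss = tf.reduce_mean(tf.nn.relu(1.0 - real_logit))
    fake_loss = tf.reduce_mean(tf.nn.relu(1.0 + fake_logit))
    return real_loss + fake_loss

def generator_hinge_loss(fake_logit):
    # the generator simply maximizes the discriminator's fake logits
    return -tf.reduce_mean(fake_logit)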

vdb loss

d_bottleneck_loss = vdb_loss(real_mu, real_logvar, i_c) + vdb_loss(fake_mu, fake_logvar, i_c)

kl-divergence (z ~ N(0, 1))

loss = kl_loss(mean, logvar)
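
For z ~ N(0, 1) this KL term has a closed form: KL(N(mu, sigma^2) || N(0, 1)) = -0.5 * sum(1 + logvar - mu^2 - exp(logvar)). A sketch:

def kl_loss(mean, logvar):
    # closed-form KL divergence between N(mean, exp(logvar)) and N(0, 1)
    return 0.5 * tf.reduce_sum(tf.square(mean) + tf.exp(logvar) - logvar - 1.0)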

Author

Junho Kim

More Repositories

1. UGATIT - Official Tensorflow implementation of U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation (ICLR 2020). Python, 6,182 stars
2. Tensorflow-Cookbook - Simple Tensorflow Cookbook for easy-to-use. Python, 2,798 stars
3. SENet-Tensorflow - Simple Tensorflow implementation of "Squeeze and Excitation Networks" using Cifar10 (ResNeXt, Inception-v4, Inception-resnet-v2). Python, 750 stars
4. StarGAN-Tensorflow - Simple Tensorflow implementation of StarGAN (CVPR 2018 Oral). Python, 715 stars
5. Self-Attention-GAN-Tensorflow - Simple Tensorflow implementation of "Self-Attention Generative Adversarial Networks" (SAGAN). Python, 540 stars
6. Densenet-Tensorflow - Simple Tensorflow implementation of Densenet using Cifar10, MNIST. Python, 507 stars
7. SPADE-Tensorflow - Simple Tensorflow implementation of "Semantic Image Synthesis with Spatially-Adaptive Normalization", a.k.a. GauGAN, SPADE (CVPR 2019 Oral). Python, 359 stars
8. MUNIT-Tensorflow - Simple Tensorflow implementation of "Multimodal Unsupervised Image-to-Image Translation" (ECCV 2018). Python, 299 stars
9. Vector_Similarity - Python and Java implementation of TS-SS from "A Hybrid Geometric Approach for Measuring Similarity Level Among Documents and Document Clustering". Python, 285 stars
10. BigGAN-Tensorflow - Simple Tensorflow implementation of "Large Scale GAN Training for High Fidelity Natural Image Synthesis" (BigGAN). Python, 261 stars
11. vit-tensorflow - Vision Transformer Cookbook with Tensorflow. Python, 236 stars
12. CartoonGAN-Tensorflow - Simple Tensorflow implementation of CartoonGAN (CVPR 2018). Python, 217 stars
13. StyleGAN-Tensorflow - Simple & intuitive Tensorflow implementation of StyleGAN (CVPR 2019 Oral). Python, 211 stars
14. GAN_Metrics-Tensorflow - Simple Tensorflow implementation of metrics for GAN evaluation (Inception score, Frechet-Inception distance, Kernel-Inception distance). Python, 205 stars
15. ResNet-Tensorflow - Simple Tensorflow implementation of pre-activation ResNet18, 34, 50, 101, 152. Python, 179 stars
16. ResNeXt-Tensorflow - Simple Tensorflow implementation of ResNeXt using Cifar10. Python, 159 stars
17. AdaBound-Tensorflow - Simple Tensorflow implementation of "Adaptive Gradient Methods with Dynamic Bound of Learning Rate" (ICLR 2019). Python, 150 stars
18. Spectral_Normalization-Tensorflow - Simple Tensorflow implementation of "Spectral Normalization for Generative Adversarial Networks" (ICLR 2018). Python, 140 stars
19. DRIT-Tensorflow - Simple Tensorflow implementation of "Diverse Image-to-Image Translation via Disentangled Representations" (ECCV 2018 Oral). Python, 117 stars
20. StarGAN_v2-Tensorflow - Simple Tensorflow implementation of StarGAN_v2. Python, 112 stars
21. AMSGrad-Tensorflow - Simple Tensorflow implementation of "On the Convergence of Adam and Beyond" (ICLR 2018). Python, 103 stars
22. RAdam-Tensorflow - Simple Tensorflow implementation of "On The Variance Of The Adaptive Learning Rate And Beyond". Python, 97 stars
23. UNIT-Tensorflow - Simple Tensorflow implementation of "Unsupervised Image to Image Translation Networks" (NIPS 2017 Spotlight). Python, 96 stars
24. Awesome-DeepLearning-Study - Summary of deep learning (Korean and English included). Python, 93 stars
25. partial_conv-Tensorflow - Simple Tensorflow implementation of "Partial Convolution based Padding" (partialconv). Python, 90 stars
26. FusionGAN-Tensorflow - Simple Tensorflow implementation of FusionGAN (CVPR 2018). Python, 79 stars
27. Tensorflow-DatasetAPI - Simple Tensorflow Dataset API tutorial for reading images. Python, 73 stars
28. TripleGAN-Tensorflow - Simple Tensorflow implementation of Triple Generative Adversarial Nets (NIPS 2017). Python, 68 stars
29. FUNIT-Tensorflow - Simple Tensorflow implementation of "Few-Shot Unsupervised Image-to-Image Translation" (ICCV 2019). Python, 65 stars
30. SphereGAN-Tensorflow - Simple Tensorflow implementation of SphereGAN (CVPR 2019 Oral). Python, 57 stars
31. GDWCT-Tensorflow - Simple Tensorflow implementation of "Image-to-Image Translation via Group-wise Deep Whitening-and-Coloring Transformation" (CVPR 2019 Oral). Python, 57 stars
32. RelativisticGAN-Tensorflow - Simple Tensorflow implementation of RelativisticGAN. Python, 51 stars
33. AdamP-Tensorflow - Tensorflow implementation of "Slowing Down the Weight Norm Increase in Momentum-based Optimizers". Python, 47 stars
34. Batch_Instance_Normalization-Tensorflow - Simple Tensorflow implementation of Batch-Instance Normalization (NIPS 2018). Python, 40 stars
35. GCNet-Tensorflow - Simple Tensorflow implementation of "GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond". Python, 39 stars
36. Diffusion-Tensorflow - Tensorflow implementations of diffusion models (DDPM, DDIM). Python, 37 stars
37. CNN-Architecture-Summary - Simple summary of CNN architectures. 34 stars
38. tf-torch-template - Deep learning project template with tensorflow & pytorch (multi-gpu version). Python, 32 stars
39. Switchable_Normalization-Tensorflow - Simple Tensorflow implementation of "Switchable Normalization". Python, 29 stars
40. AdaConv-Tensorflow - Simple Tensorflow implementation of "Adaptive Convolutions for Structure-Aware Style Transfer" (CVPR 2021). Python, 26 stars
41. AttnGAN-Tensorflow - Simple Tensorflow implementation of "AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks" (CVPR 2018). Python, 26 stars
42. CLIP-Tensorflow - Simple Tensorflow implementation of CLIP. Python, 24 stars
43. Image_similarity_with_elastic_search - Find the original image of a converted image with Elasticsearch. Python, 22 stars
44. RealnessGAN-Tensorflow - Simple Tensorflow implementation of "RealnessGAN: Real or Not Real, that is the Question" (ICLR 2020 Spotlight). Python, 22 stars
45. ControlGAN-Tensorflow - Simple Tensorflow implementation of "ControlGAN: Controllable Text-to-Image Generation" (NeurIPS 2019). Python, 19 stars
46. CycleGAN-Tensorflow - Simple Tensorflow implementation of CycleGAN. Python, 18 stars
47. diffusion-pytorch - Lecture materials for Ewha Womans University. Python, 18 stars
48. SRM-Tensorflow - Simple Tensorflow implementation of "SRM: A Style-based Recalibration Module for Convolutional Neural Networks". Python, 18 stars
49. GAN-Tensorflow - An implementation of GAN using TensorFlow. Python, 17 stars
50. SDIT-Tensorflow - Simple Tensorflow implementation of "SDIT: Scalable and Diverse Cross-domain Image Translation" (ACM-MM 2019). Python, 16 stars
51. StableGAN-Tensorflow - Simple Tensorflow implementation of "Stabilizing Adversarial Nets With Prediction Methods" (ICLR 2018). Python, 16 stars
52. Toward_spatial_unbiased-Tensorflow - Simple Tensorflow implementation of "Toward Spatially Unbiased Generative Models" (ICCV 2021). Python, 16 stars
53. denoising-diffusion-gan-Tensorflow - Tensorflow implementation of "Tackling the Generative Learning Trilemma with Denoising Diffusion GANs" (ICLR 2022 Spotlight). Python, 15 stars
54. MirrorGAN-Tensorflow - Simple Tensorflow implementation of "MirrorGAN: Learning Text-to-image Generation by Redescription" (CVPR 2019). Python, 15 stars
55. Word2VecJava - Word2Vec in Java (Google's 2013 word2vec open source). Java, 14 stars
56. StackGAN-Tensorflow - Simple Tensorflow implementation of "StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks" (ICCV 2017 Oral). Python, 13 stars
57. MDGAN-Tensorflow - Simple Tensorflow implementation of "MDGAN: Mixture Density Generative Adversarial Networks" (CVPR 2019). Python, 11 stars
58. DiscoGAN-Tensorflow - Simple Tensorflow implementation of DiscoGAN. Python, 11 stars
59. stylegan2-pytorch - Pytorch implementation of StyleGAN2 in my style. Python, 11 stars
60. pix2pix-Tensorflow - Simple Tensorflow implementation of "Image-to-Image Translation with Conditional Adversarial Networks" (CVPR 2017). Python, 11 stars
61. DCGAN-Tensorflow - Simple Tensorflow implementation of "Deep Convolutional Generative Adversarial Networks". Python, 10 stars
62. taki0112 - 8 stars
63. coding_interview - Programmers coding interview in Korean. Python, 8 stars
64. CIPS-Tensorflow - Simple Tensorflow implementation of "Image Generators with Conditionally-Independent Pixel Synthesis" (CVPR 2021 Oral). Python, 7 stars
65. Image_classification_CNN-Tensorflow - Classify dog and cat images from Kaggle data. Python, 7 stars
66. TFIDF_Java - Get TF-IDF of words. Java, 4 stars
67. CNN_Tensorflow - Convolutional Neural Network with Tensorflow, MNIST data. Python, 4 stars
68. grid_sample-Tensorflow - Tensorflow implementation of PyTorch's grid_sample. Python, 3 stars
69. NiN-Tensorflow - Simple Tensorflow implementation of Network in Network. Python, 2 stars
70. Naver-Keyword_Analysis - Keyword analysis from Naver Hack Day. Java, 2 stars
71. Deep-Q-network - Reinforcement learning study. Python, 1 star
72. mnist_embedding - Python, 1 star
73. Bamboo - 1 star