
AnimeGAN Pytorch (Open In Colab)

Pytorch implementation of AnimeGAN for fast photo animation

[Demo image pair omitted: input photo and its animated version]

Documentation

1. Prepare dataset

1.1 To download the dataset used in the paper, run the commands below

wget -O anime-gan.zip https://github.com/ptran1203/pytorch-animeGAN/releases/download/v1.0/dataset_v1.zip
unzip anime-gan.zip -d /content

=> The dataset will be extracted into a folder named dataset in the extraction directory.
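
If you prefer to stay in Python (for example inside the Colab notebook), the standard library can do the same download and extraction; this is just an alternative to the wget/unzip commands above:

import urllib.request
import zipfile

# Same release asset as the wget command above.
URL = "https://github.com/ptran1203/pytorch-animeGAN/releases/download/v1.0/dataset_v1.zip"

urllib.request.urlretrieve(URL, "anime-gan.zip")
with zipfile.ZipFile("anime-gan.zip") as zf:
    zf.extractall("/content")  # creates the dataset folder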

1.2 Create custom data from anime video

You need a video file on your machine, for example: /home/ubuntu/Downloads/kimetsu_yaiba.mp4

Step 1. Create anime images from the video

python3 script/video_to_images.py --video-path /home/ubuntu/Downloads/kimetsu_yaiba.mp4 \
                                  --save-path dataset/Kimetsu/style \
                                  --max-image 1800 \
                                  --image-size 256
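
For reference, frame extraction boils down to reading and resizing frames with OpenCV. The sketch below is only an illustration of that idea, not the actual contents of script/video_to_images.py; the real script may sample frames differently (e.g. skipping frames for more variety).

import os

import cv2  # opencv-python

def video_to_images(video_path, save_path, max_image=1800, image_size=256):
    # Save up to `max_image` resized frames from the video (illustrative sketch).
    os.makedirs(save_path, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = 0
    while saved < max_image:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        frame = cv2.resize(frame, (image_size, image_size))
        cv2.imwrite(os.path.join(save_path, f"{saved:05d}.jpg"), frame)
        saved += 1
    cap.release()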

Step 2. Create the edge-smoothed version of the dataset from Step 1.

python3 script/edge_smooth.py --dataset Kimetsu --image-size 256
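
Edge smoothing detects edges in the anime images, dilates them, and Gaussian-blurs only those regions (following the edge-smoothing idea from the CartoonGAN/AnimeGAN papers). A minimal OpenCV sketch of the per-image operation, for illustration only and not the repository's script/edge_smooth.py:

import cv2
import numpy as np

def edge_smooth(img, kernel_size=5):
    # Blur a small neighbourhood around detected edges, leaving the rest untouched.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    dilated = cv2.dilate(edges, np.ones((kernel_size, kernel_size), np.uint8))
    blurred = cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
    out = img.copy()
    out[dilated > 0] = blurred[dilated > 0]
    return out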

2. Train animeGAN

To train animeGAN from the command line, run train.py as follows:

python3 train.py --dataset Hayao \
                 --batch 6 \
                 --init-epochs 4 \
                 --checkpoint-dir {ckp_dir} \
                 --save-image-dir {save_img_dir} \
                 --save-interval 1 \
                 --gan-loss lsgan \
                 --init-lr 0.0001 \
                 --lr-g 0.00002 \
                 --lr-d 0.00004 \
                 --wadvd 10.0 \
                 --wadvg 10.0 \
                 --wcon 1.5 \
                 --wgra 3.0 \
                 --wcol 30.0 \
                 --resume GD \
                 --use_sn

  • --dataset: Hayao, Shinkai, Kimetsu, Paprika, SummerWar, or your custom data from step 1.2
  • --gan-loss: one of lsgan, hinge, bce
  • --wadvd / --wadvg: adversarial loss weight for the discriminator / generator
  • --wcon: content loss weight
  • --wgra: Gram loss weight
  • --wcol: color loss weight
  • --resume: G to start from a pre-trained generator only, GD to continue training the full GAN
  • --use_sn: use spectral normalization (off by default)
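
The loss-weight flags map onto the terms of the usual AnimeGAN objectives. As a rough sketch of how such a weighted generator loss is typically combined (function and tensor names below are illustrative, not the repository's actual API):

import torch
import torch.nn.functional as F

def generator_loss(fake_logits, fake_feat, photo_feat, gram_fake, gram_anime,
                   fake_img, photo_img,
                   wadvg=10.0, wcon=1.5, wgra=3.0, wcol=30.0):
    # Illustrative weighted generator objective (lsgan-style adversarial term);
    # the repository's actual loss code may differ in detail.
    adv = F.mse_loss(fake_logits, torch.ones_like(fake_logits))  # fool the discriminator
    con = F.l1_loss(fake_feat, photo_feat)                        # VGG content features
    gra = F.l1_loss(gram_fake, gram_anime)                        # Gram-matrix texture/style
    col = F.l1_loss(fake_img, photo_img)                          # color reconstruction
    return wadvg * adv + wcon * con + wgra * gra + wcol * col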

3. Transform images

To convert all images in a folder or a single image, run inference_image.py, for example:

Both --src and --dest can be either a directory or a file.

python3 inference_image.py --checkpoint {ckp_dir} \
                           --src /content/test/HR_photo \
                           --dest {working_dir}/inference_image_v2
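
Under the hood, transforming one image is a normalized forward pass through the generator. The sketch below assumes a generator G already constructed and loaded from the repository's model code; it is illustrative, not the actual inference_image.py:

import cv2
import numpy as np
import torch

def transform_image(G, src, dest, device="cuda"):
    # `G` is the trained generator (loaded with torch.load / load_state_dict).
    G = G.to(device).eval()
    img = cv2.cvtColor(cv2.imread(src), cv2.COLOR_BGR2RGB)
    x = torch.from_numpy(img).float().permute(2, 0, 1).unsqueeze(0) / 127.5 - 1.0  # scale to [-1, 1]
    with torch.no_grad():
        y = G(x.to(device))[0].cpu()
    out = ((y.permute(1, 2, 0).numpy() + 1.0) * 127.5).clip(0, 255).astype(np.uint8)
    cv2.imwrite(dest, cv2.cvtColor(out, cv2.COLOR_RGB2BGR))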

4. Transform video

To convert a video into its anime version, run inference_video.py, for example:

Be careful when choosing --batch-size; a value that is too high can cause a CUDA out-of-memory error if the video resolution is large.

python3 inference_video.py --checkpoint {ckp_dir} \
                           --src /content/test_vid_3.mp4 \
                           --dest /content/test_vid_3_anime.mp4 \
                           --batch-size 2
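
Conceptually, video inference reads frames with OpenCV, runs them through the generator in batches of --batch-size, and writes the results back out. The sketch below is illustrative (G is an already-loaded generator, as in the image example above), not the actual inference_video.py:

import cv2
import numpy as np
import torch

def transform_video(G, src, dest, batch_size=2, device="cuda"):
    G = G.to(device).eval()
    cap = cv2.VideoCapture(src)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(dest, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

    def flush(frames):
        # Larger batches are faster but use more GPU memory (see the note above).
        x = torch.from_numpy(np.stack(frames)).float().permute(0, 3, 1, 2) / 127.5 - 1.0
        with torch.no_grad():
            y = G(x.to(device)).cpu()
        for f in ((y.permute(0, 2, 3, 1).numpy() + 1.0) * 127.5).clip(0, 255).astype(np.uint8):
            writer.write(cv2.cvtColor(f, cv2.COLOR_RGB2BGR))

    batch = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        batch.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if len(batch) == batch_size:
            flush(batch)
            batch = []
    if batch:
        flush(batch)
    cap.release()
    writer.release()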

Anime transformation results (see more)

[Image grid omitted: input photos (left) and their Hayao-style outputs (right)]

Checklist

  • Add Google Colab
  • Add implementation details
  • Add and train on other data