MMGEN-FaceStylor

English | 简体中文

Introduction

This repo is an efficient toolkit for face stylization based on the paper "AgileGAN: Stylizing Portraits by Inversion-Consistent Transfer Learning". Note that since the training code of AgileGAN has not been released, this repo merely adopts the pipeline from AgileGAN and combines it with other helpful practices from the literature.

This project is based on MMCV and MMGEN; stars and forks are welcome 🤗!

Results from FaceStylor trained by MMGEN

Requirements

  • CUDA 10.0 / CUDA 10.1
  • Python 3
  • PyTorch >= 1.6.0
  • MMCV-Full >= 1.3.15
  • MMGeneration >= 0.3.0

Setup

Step-1: Create an Environment

First, we should build a conda virtual environment and activate it.

conda create -n facestylor python=3.7 -y
conda activate facestylor

Assuming you have installed CUDA 10.1, you need to install the prebuilt PyTorch with CUDA 10.1 support.

conda install pytorch=1.6.0 cudatoolkit=10.1 torchvision -c pytorch
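
Optionally, you can verify the installation with a short Python check run inside the activated environment:

import torch

print(torch.__version__)          # expect 1.6.0
print(torch.version.cuda)         # expect 10.1
print(torch.cuda.is_available())  # True if the GPU and driver are set up correctly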

Step-2: Install MMCV and MMGEN

We can run the following command to install MMCV.

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.6.0/index.html

Of course, you can also refer to the MMCV Docs to install it.

Next, install MMGEN, which contains the basic generative models used in this project.

# Clone the MMGeneration repository.
git clone https://github.com/open-mmlab/mmgeneration.git
cd mmgeneration
# Install build requirements and then install MMGeneration.
pip install -r requirements.txt
pip install -v -e .  # or "python setup.py develop"
cd ..
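
Optionally, verify that both packages import and meet the version requirements:

import mmcv
import mmgen

print(mmcv.__version__)   # should be >= 1.3.15
print(mmgen.__version__)  # should be >= 0.3.0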

Step-3: Clone repo and prepare the data and weights

Now, we need to clone this repo and install dependencies.

git clone https://github.com/open-mmlab/MMGEN-FaceStylor.git
cd MMGEN-FaceStylor
pip install -r requirements.txt

For convenience, we suggest that you create these folders under MMGEN-FaceStylor:

mkdir data
mkdir work_dirs
mkdir work_dirs/experiments
mkdir work_dirs/pre-trained

For testing and training, you need to download some necessary data provided by AgileGAN and put it under the data folder, or just run this:

wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=1AavRxpZJYeCrAOghgtthYqVB06y9QJd3' -O data/shape_predictor_68_face_landmarks.dat

or

wget --no-check-certificate https://github.com/JeffTrain/selfie/raw/master/shape_predictor_68_face_landmarks.dat -O data/shape_predictor_68_face_landmarks.dat

Then, you can put or create soft-links for your data under the data folder, and store your experiments under work_dirs/experiments.
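
To make sure the download succeeded, you can try loading the landmark model with dlib, which is used for face alignment (this check is optional):

import dlib

# Load the 68-point landmark model downloaded above; an error here usually
# means the file is corrupt or the download was incomplete.
predictor = dlib.shape_predictor('data/shape_predictor_68_face_landmarks.dat')
print('Landmark predictor loaded.')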

We also provide some pre-trained weights.

Pre-trained Weights
FFHQ-1024 StyleGAN2
FFHQ-256 StyleGAN2
IR-SE50 Model
Encoder for FFHQ-1024 StyleGAN2
Encoder for FFHQ-256 StyleGAN2
MetFace-Oil 1024 StyleGAN2
MetFace-Sketch 1024 StyleGAN2
Toonify 1024 StyleGAN2
Cartoon 256
Bitmoji 256
Comic 256
More Styles on the Way!

Play with MMGEN-FaceStylor

If you have followed the steps above, you can start to play with FaceStylor!

Quick Try

To quickly try our project, please run the command below:

python demo/quick_try.py demo/src.png --style toonify

Then, you can check the result in work_dirs/demos/agile_result.png.

  • If you want to play with your own photo, replace demo/src.png with the path to your photo.
  • If you want to switch to another style, replace toonify with that style. Currently supported styles include toonify, oil, sketch, bitmoji, cartoon, and comic; a small sketch that loops over all of them follows below.
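
As a convenience, here is a small sketch (not part of the repo) that runs the quick-try demo over every supported style for one photo:

import subprocess

styles = ['toonify', 'oil', 'sketch', 'bitmoji', 'cartoon', 'comic']
for style in styles:
    # Each run overwrites work_dirs/demos/agile_result.png, so copy the file
    # elsewhere between runs if you want to keep every output.
    subprocess.run(['python', 'demo/quick_try.py', 'demo/src.png',
                    '--style', style], check=True)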

Inversion

The inversion task will adopt a source image as input and return the most similar image that can be generated by the generator model.

For inversion, you can directly use agilegan_demo like this:

python demo/agilegan_demo.py SOURCE_PATH CONFIG [--ckpt CKPT] [--device DEVICE] [--save-path SAVE_PATH]

Here, you should set SOURCE_PATH to your image path, CONFIG to the config file path, and CKPT to the checkpoint path.

Take the CelebA-HQ encoder as an example: download the weights to work_dirs/pre-trained/agile_encoder_celebahq1024x1024_lr_1e-4_150k.pth, put your test image under data, and run

python demo/agilegan_demo.py demo/src.png configs/agilegan/agile_encoder_celebahq1024x1024_lr_1e-4_150k.py --ckpt work_dirs/pre-trained/agile_encoder_celebahq_lr_1e-4_150k.pth

You will find the result at work_dirs/demos/agile_result.png.

Stylization

Since the encoder and decoder for stylization may be trained with different configs, you need to set their checkpoint paths in the config file. Take MetFace-Oil as an example; you can see the first two lines of its config file.

encoder_ckpt_path = xxx
stylegan_weights = xxx
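
For instance, with the weights from the table above downloaded into work_dirs/pre-trained, those two lines might look like the following (the file names here are illustrative; use whatever names you saved the weights under):

encoder_ckpt_path = 'work_dirs/pre-trained/agile_encoder_celebahq1024x1024_lr_1e-4_150k.pth'
stylegan_weights = 'work_dirs/pre-trained/agile_transfer_metface-oil1024x1024.pth'  # illustrative name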

Make sure your actual weight paths match the config, then run the same command without specifying CKPT:

python demo/agilegan_demo.py SOURCE_PATH CONFIG [--device DEVICE] [--save-path SAVE_PATH]

Train

This section describes how to fine-tune with your own dataset. With only 100-200 images and less than one hour, you can train your own StyleGAN2. All you need to do is copy an agile_transfer config, like this one, modify imgs_root to point to your actual data root, and choose one of the two commands below to train your own model.

# For distributed training
bash tools/dist_train.sh ${CONFIG_FILE} ${GPUS_NUMBER} \
    --work-dir ./work_dirs/experiments/experiments_name \
    [optional arguments]
# For slurm training
bash tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG} ${WORK_DIR} \
    [optional arguments]
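
For reference, the only edit the copied config usually needs is the data root; a minimal sketch (the path is hypothetical):

# In your copied agile_transfer config: point the dataset at your own images.
imgs_root = 'data/my_style_faces'  # folder with your 100-200 style images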

Training Details

In this part, I will explain some training details, including the ADA setting, layer freezing, and losses.

ADA Setting

To use adaptive discriminator augmentation, set ADAStyleGAN2Discriminator as your discriminator and adjust the ADAAug setting as follows:

model = dict(
    discriminator=dict(
        type='ADAStyleGAN2Discriminator',
        data_aug=dict(
            type='ADAAug',
            aug_pipeline=aug_kwargs,  # This and the arguments below can be set by yourself.
            update_interval=4,
            augment_initial_p=0.,
            ada_target=0.6,
            ada_kimg=500,
            use_slow_aug=False)))

Layer Freeze Setting

In transfer learning, it is routine to freeze some layers of the model. In the GAN literature, freezing the shallow layers of the pre-trained generator and discriminator may help training convergence. FreezeD can be used for small-data fine-tuning, and FreezeG can be used for pseudo translation.

model = dict(
    freezeD=5,  # set to -1 if not needed
    freezeG=4)  # set to -1 if not needed

Losses Setting

In AgileGAN, to preserve the recognizable identity of the generated image, the authors introduce a similarity loss at the perceptual level. You can adjust lpips_lambda as follows:

model = dict(lpips_lambda=0.8)

Generally speaking, the larger lpips_lambda is, the better the recognizable identity is preserved.
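
Conceptually, the similarity term is a perceptual (LPIPS) distance between the source face and its stylized reconstruction, weighted by lpips_lambda. A minimal sketch with the standalone lpips package (the actual loss module in this repo may differ):

import torch
import lpips

loss_fn = lpips.LPIPS(net='vgg')            # perceptual distance network
src = torch.rand(1, 3, 256, 256) * 2 - 1    # LPIPS expects images in [-1, 1]
recon = torch.rand(1, 3, 256, 256) * 2 - 1
lpips_lambda = 0.8
similarity_loss = lpips_lambda * loss_fn(src, recon).mean()
print(similarity_loss.item())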

Dataset Links

To make it easier for you to train your own models, here are some links to publicly available datasets.

Dataset Links
MetFaces
AFHQ
Toonify
photo2cartoon
selfie2anime
face2comics v2
High-Resolution Anime Face
Bitmoji

Applications

We also provide LayerSwap and DNI apps to trade off between preserving the structure of the original image and the degree of stylization. You can adjust their parameters to get the result you want.

LayerSwap

When Layer Swapping is applied, the generated images have a higher similarity to the source image than AgileGAN's results.

From Left to Right: Input, Layer-Swap with L = 4, 3, 2, xxx Output

Run this command with different SWAP_LAYER values (1, 2, 3, 4, etc.):

python demo/quick_try.py demo/src.png --style toonify --swap-layer=SWAP_LAYER

You will find that the result tends to stay closer to the source image.

We also provide a blending script to create and save the mixed weights.

python apps/blend_weights.py modelA modelB [--swap-layer SWAP_LAYER] [--show-input SHOW_INPUT] [--device DEVICE] [--save-path SAVE_PATH]

Here, modelA is the base model; only the deep layers of its decoder will be replaced with modelB's counterparts.

DNI

Deep Network Interpolation between L4 and AgileGAN output

For more precise stylization control, you can try DNI with the following command:

python apps/dni.py source_path modelA modelB [--intervals INTERVALS] [--device DEVICE] [--save-folder SAVE_FOLDER]

Here, modelA and modelB should be PSPEncoderDecoder models (configs starting with agile_encoder) whose decoders have different stylization degrees. INTERVALS specifies the number of interpolation steps.
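
Under the hood, DNI linearly blends the parameters of two decoders that share the same architecture. A conceptual sketch (illustrative only, not the repo's exact implementation):

import torch

def interpolate_state_dicts(state_a, state_b, alpha):
    """Blend two checkpoints with weight alpha in [0, 1]."""
    return {k: (1 - alpha) * state_a[k] + alpha * state_b[k] for k in state_a}

# Toy example: blend two tiny "checkpoints" at five evenly spaced alphas.
a = {'w': torch.zeros(2)}
b = {'w': torch.ones(2)}
for alpha in torch.linspace(0, 1, steps=5):
    print(alpha.item(), interpolate_state_dicts(a, b, alpha.item())['w'])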

You can also try applications in MMGEN, like interpolation and SeFA.

Interpolation


We have provided an application script for users. You can use apps/interpolate_sample.py with the following command to interpolate between samples of unconditional models:

python apps/interpolate_sample.py \
    ${CONFIG_FILE} \
    ${CHECKPOINT} \
    [--show-mode ${SHOW_MODE}] \
    [--endpoint ${ENDPOINT}] \
    [--interval ${INTERVAL}] \
    [--space ${SPACE}] \
    [--samples-path ${SAMPLES_PATH}] \
    [--batch-size ${BATCH_SIZE}]

For more details, you can read related Docs.

Gallery

Toonify





Oil





Cartoon





Comic





Bitmoji





Notes and TODOs

  • For the encoder, I experimented with a VAE encoder but found no significant improvement for inversion, so I follow the same "encoding into Z-plus space" approach as the authors. Only a vanilla encoder is offered this time; the VAE-encoder version will be released later.
  • For the generator, a vanilla StyleGAN2 generator is released; an attribute-aware generator will be released in the next version.
  • For training settings, the parameters differ slightly from the paper. I also tried ADA, FreezeD, and other methods not mentioned in the paper.
  • More styles will be available in the next version.
  • More applications will be available in the next version.
  • Further code cleanup.

Acknowledgments

Code references:

Display photos from: https://unsplash.com/t/people

Web demo powered by: https://gradio.app/

License

This project is released under the Apache 2.0 license. Some implementations in MMGEN-FaceStylor use licenses other than Apache 2.0. Please refer to LICENSES.md for a careful check if you are using our code for commercial purposes.
