
A Latent Transformer for Disentangled Face Editing in Images and Videos

Official implementation for the paper: A Latent Transformer for Disentangled Face Editing in Images and Videos.

[Video Editing Results]

Requirements

Dependencies

  • Python 3.6
  • PyTorch 1.8
  • OpenCV
  • tensorboard_logger

You can create a new conda environment for this repo by running

conda env create -f environment.yml
conda activate lattrans 
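
A quick sanity check that the environment resolves the core dependencies (torch and cv2 are the import names of PyTorch and OpenCV):

python -c "import torch, cv2; print(torch.__version__, cv2.__version__)"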

Prepare the StyleGAN2 encoder and generator

  • We use the pretrained StyleGAN2 encoder and generator released with the paper Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation. Download the official implementation and save it to the pixel2style2pixel/ directory, and download the pretrained model to pixel2style2pixel/pretrained_models/ (a setup sketch follows this list).

  • In order to save the latent codes to the designated path, we slightly modify pixel2style2pixel/scripts/inference.py:

    # modify run_on_batch(): with return_latents=True the pSp forward pass
    # returns (images, latents) instead of images only
    if opts.latent_mask is None:
        result_batch = net(inputs, randomize_noise=False, resize=opts.resize_outputs, return_latents=True)

    # modify run(): unpack the (images, latents) pair and save the latent codes
    tic = time.time()
    result_batch, latent_batch = run_on_batch(input_cuda, net, opts)
    latent_save_path = os.path.join(test_opts.exp_dir, 'latent_code_%05d.npy' % global_i)
    np.save(latent_save_path, latent_batch.cpu().numpy())
    toc = time.time()
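A minimal setup sketch, assuming the official pixel2style2pixel repository on GitHub (github.com/eladrich/pixel2style2pixel) and the FFHQ encoder checkpoint psp_ffhq_encode.pt linked from its README; the download step itself is left to you:

# clone the official pSp implementation
git clone https://github.com/eladrich/pixel2style2pixel.git

# place the pretrained FFHQ encoder where the scripts expect it
mkdir -p pixel2style2pixel/pretrained_models
# download psp_ffhq_encode.pt via the link in the pSp README, then:
mv psp_ffhq_encode.pt pixel2style2pixel/pretrained_models/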

Training

  • Prepare the training data

    To train the latent transformers, you can download our prepared dataset to the directory data/ and the pretrained latent classifier to the directory models/.

    sh download.sh
    

    You can also prepare your own training data. To do so, map your dataset to latent codes using the StyleGAN2 encoder; a corresponding label file is also required. You can keep using our pretrained latent classifier, or, to train your own latent classifier on new labels, use pretraining/latent_classifier.py (a data-preparation sketch follows this list).

  • Training

    You can modify the training options in the config files under the directory configs/.

    python train.py --config 001 
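
A minimal sketch of assembling custom training data, assuming latent codes were dumped by the modified inference.py above; the (18, 512) W+ code shape, the output file names, and the binary-attribute label matrix are assumptions for illustration, not the repository's exact convention:

import glob

import numpy as np

# collect the per-image latent codes saved by the modified pSp inference.py
latent_files = sorted(glob.glob('data/my_dataset/latent_code_*.npy'))
latents = np.concatenate([np.load(f) for f in latent_files])  # (N, 18, 512) W+ codes

# hypothetical label matrix: one row per image, one column per binary attribute
# (e.g. 40 CelebA-style attributes); fill it from your own annotations
labels = np.zeros((len(latent_files), 40), dtype=np.float32)

np.save('data/my_latents.npy', latents)
np.save('data/my_labels.npy', labels)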
    

Testing

Single Attribute Manipulation

Make sure that the latent classifier is downloaded to the directory models/ and the StyleGAN2 encoder is prepared as described above. After training your latent transformers, you can use test.py to run a latent transformer on the images in the test directory data/test/. We also provide several pretrained models here (run download.sh to download them). The output images will be saved in the folder outputs/. You can select the desired attribute with --attr.

python test.py --config 001 --attr Eyeglasses --out_path ./outputs/
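
For example, to edit several attributes in one run (the attribute names other than Eyeglasses are illustrative and must match the labels your transformers were trained on):

for attr in Eyeglasses Smiling Young; do
    python test.py --config 001 --attr "$attr" --out_path ./outputs/
done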

If you want to test the model on your custom images, you first need to encode the images to the latent space of StyleGAN using the pretrained encoder.

cd pixel2style2pixel/
python scripts/inference.py \
--checkpoint_path=pretrained_models/psp_ffhq_encode.pt \
--data_path=../data/test/ \
--exp_dir=../data/test/ \
--test_batch_size=1
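
After inference, each latent code is saved next to the test images following the latent_code_%05d.npy pattern from the modified inference.py. A quick sanity check (the (1, 18, 512) W+ shape is an assumption based on the pSp FFHQ encoder):

python -c "import numpy as np; print(np.load('data/test/latent_code_00000.npy').shape)"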

Sequential Attribute Manipulation

You can reproduce the sequential editing results in the paper using notebooks/figure_sequential_edit.ipynb and the results in the supplementary material using notebooks/figure_supplementary.ipynb.
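
Conceptually, sequential editing composes the trained latent transformers in latent space. A minimal sketch under assumed names: DummyTransformer stands in for the repository's trained per-attribute networks, whose actual loading code lives in the notebooks:

import numpy as np
import torch
import torch.nn as nn

# stand-in for one trained latent transformer T_attr; a real transformer
# shifts the W+ code toward the target attribute, here it is the identity
class DummyTransformer(nn.Module):
    def forward(self, w):
        return w

transformers = {'Eyeglasses': DummyTransformer(), 'Smiling': DummyTransformer()}

# latent code saved by the pSp encoder, assumed shape (1, 18, 512) in W+
w = torch.from_numpy(np.load('data/test/latent_code_00000.npy'))

# sequential editing: apply one attribute transformer after another
for attr in ['Eyeglasses', 'Smiling']:
    w = transformers[attr](w)

# w is now the edited latent code; render it with the StyleGAN2 generator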

User Interface

We also provide an interactive visualization in notebooks/visu_manipulation.ipynb, where the user can choose the desired attributes for manipulation and define the magnitude of the edit for each attribute.

Video Manipulation

Video Result

We provide a script that performs attribute manipulation on the videos in the test directory data/video/. Please ensure that the StyleGAN2 encoder is prepared as described above. You can upload your own video and modify the options in run_video_manip.sh. The video editing results presented in the paper can be viewed above.

sh run_video_manip.sh

Citation

@inproceedings{yao2021latent,
  title={A Latent Transformer for Disentangled Face Editing in Images and Videos},
  author={Yao, Xu and Newson, Alasdair and Gousseau, Yann and Hellier, Pierre},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}

License

Copyright © 2021, InterDigital R&D France. All rights reserved.

This source code is made available under the license found in the LICENSE.txt in the root directory of this source tree.
