  • Stars: 203
  • Rank: 191,717 (Top 4%)
  • Language: Python
  • License: MIT License
  • Created: about 1 year ago
  • Updated: about 1 year ago

Repository Details

StyleDrop

Huggingface

This is an unofficial PyTorch implementation of StyleDrop: Text-to-Image Generation in Any Style.

Unlike the parameters reported in the paper (Round 1), we set $\lambda_A=2.0$, $\lambda_B=5.0$, d_prj=32, and is_shared=False, which we found to work better. These hyperparameters can be found in configs/custom.py.

We release them to facilitate community research.
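
For orientation, here is a minimal sketch of how these values might be expressed in an ml_collections-style config; the field names below are illustrative assumptions, and the authoritative definitions live in configs/custom.py.

# Illustrative sketch only: the field names below are assumptions,
# not necessarily the keys used in configs/custom.py.
import ml_collections

def get_config():
    config = ml_collections.ConfigDict()
    config.lambda_A = 2.0     # value used in this repo instead of the paper's Round 1 setting
    config.lambda_B = 5.0
    config.d_prj = 32         # style adapter projection dimension
    config.is_shared = False  # adapter weights are not shared
    return config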

Sample results (result1 through result5; see the images in the repository).

News

  • [07/06/2023] An online Gradio demo is available here.

Todo List

  • Release the code.
  • Add a Gradio inference demo (runs locally).
  • Add iterative training (Round 2).

Data & Weights Preparation

First, download the VQGAN checkpoint from this link (from MAGE, thanks!) and place it at assets/vqgan_jax_strongaug.ckpt.

Then, download the pre-trained checkpoints from this link to assets/ckpts for evaluation or to continue training for more iterations.

Finally, prepare the empty feature by running the command python extract_empty_feature.py

The final directory structure is as follows:

.
├── assets
│   ├── ckpts
│   │   ├── cc3m-285000.ckpt
│   │   │   ├── lr_scheduler.pth
│   │   │   ├── nnet_ema.pth
│   │   │   ├── nnet.pth
│   │   │   ├── optimizer.pth
│   │   │   └── step.pth
│   │   └── imagenet256-450000.ckpt
│   │       ├── lr_scheduler.pth
│   │       ├── nnet_ema.pth
│   │       ├── nnet.pth
│   │       ├── optimizer.pth
│   │       └── step.pth
│   ├── fid_stats
│   │   ├── fid_stats_cc3m_val.npz
│   │   └── fid_stats_imagenet256_guided_diffusion.npz
│   ├── pipeline.png
│   ├── contexts
│   │   └── empty_context.npy
│   └── vqgan_jax_strongaug.ckpt
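
To verify the layout programmatically, a small sanity check along these lines can help (the paths come from the tree above; this script is not part of the repository):

# Optional sanity check of the expected asset layout.
# Not part of the repository; the paths follow the directory tree above.
from pathlib import Path

REQUIRED = [
    "assets/vqgan_jax_strongaug.ckpt",
    "assets/contexts/empty_context.npy",
    "assets/ckpts/cc3m-285000.ckpt/nnet_ema.pth",
    "assets/fid_stats/fid_stats_cc3m_val.npz",
]

missing = [p for p in REQUIRED if not Path(p).exists()]
if missing:
    print("Missing files:", *missing, sep="\n  - ")
else:
    print("All required assets are in place.")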

Dependencies

Same as MUSE-PyTorch.

conda install pytorch torchvision torchaudio cudatoolkit=11.3
pip install accelerate==0.12.0 absl-py ml_collections einops wandb ftfy==6.1.1 transformers==4.23.1 loguru webdataset==0.2.5 gradio
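
After installing, a quick way to confirm the environment (optional, not a required step):

# Optional environment check.
import torch, accelerate, transformers

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("accelerate:", accelerate.__version__)      # expected: 0.12.0
print("transformers:", transformers.__version__)  # expected: 4.23.1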

Train

All of the style data used in the paper is placed in the data directory.

  1. Modify data/one_style.json (note that one_style.json and the style data must be in the same directory). The format is file_name: [object, style], for example (an optional validation sketch follows after this list):
{"image_03_05.jpg":["A bear","in kid crayon drawing style"]}
  2. Run the training script as follows.
#!/bin/bash
unset EVAL_CKPT
unset ADAPTER
export OUTPUT_DIR="output_dir/for/this/experiment"
accelerate launch --num_processes 8 --mixed_precision fp16 train_t2i_custom_v2.py --config=configs/custom.py
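
As a quick sanity check of data/one_style.json before training, a minimal sketch such as the following can be used (it is not part of the repository; it simply mirrors the format described above):

# Optional check that one_style.json matches the file_name: [object, style]
# format and that the style images sit next to the JSON file.
import json
from pathlib import Path

json_path = Path("data/one_style.json")
data_dir = json_path.parent  # style images must be in the same directory

entries = json.loads(json_path.read_text())
for file_name, (obj, style) in entries.items():
    assert (data_dir / file_name).exists(), f"missing image: {file_name}"
    print(f"{file_name}: object={obj!r}, style={style!r}")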

Inference

The pretrained style_adapter weights can be downloaded from 🤗 Hugging Face.

#!/bin/bash
export EVAL_CKPT="assets/ckpts/cc3m-285000.ckpt" 
export ADAPTER="path/to/your/style_adapter"

export OUTPUT_DIR="output/for/this/experiment"

accelerate launch --num_processes 8 --mixed_precision fp16 train_t2i_custom_v2.py --config=configs/custom.py
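
The same train_t2i_custom_v2.py entry point is reused for inference; judging from the scripts above, the mode is selected through environment variables. A rough sketch of that pattern (not the repository's actual code) is:

# Rough sketch of the environment-variable switch used by the scripts above;
# the real logic lives in train_t2i_custom_v2.py and may differ.
import os

eval_ckpt = os.environ.get("EVAL_CKPT")   # if set, run inference from this checkpoint
adapter = os.environ.get("ADAPTER")       # path to a trained style adapter
output_dir = os.environ.get("OUTPUT_DIR", "output")

if eval_ckpt:
    print(f"Inference from {eval_ckpt} with adapter {adapter} -> {output_dir}")
else:
    print(f"Training a new style adapter -> {output_dir}")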

Gradio Demo

Put the style_adapter weights in the ./style_adapter folder and run the following command to launch the demo:

python gradio_demo.py

The demo is also hosted on HuggingFace.
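
For readers unfamiliar with Gradio, the demo follows the usual Interface pattern sketched below; this is a generic illustration, not the contents of gradio_demo.py, and the generate function is only a placeholder:

# Generic Gradio pattern for illustration only; gradio_demo.py in this
# repository loads the model and a style adapter instead of this placeholder.
import gradio as gr

def generate(prompt):
    # Placeholder: the real demo runs StyleDrop sampling with the selected adapter.
    return f"(image would be generated for: {prompt!r})"

demo = gr.Interface(fn=generate, inputs="text", outputs="text")
demo.launch()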

Citation

@article{sohn2023styledrop,
  title={StyleDrop: Text-to-Image Generation in Any Style},
  author={Sohn, Kihyuk and Ruiz, Nataniel and Lee, Kimin and Chin, Daniel Castro and Blok, Irina and Chang, Huiwen and Barber, Jarred and Jiang, Lu and Entis, Glenn and Li, Yuanzhen and others},
  journal={arXiv preprint arXiv:2306.00983},
  year={2023}
}

Acknowledgment

Star History

More Repositories

  1. AdelaiDet (Python, 3,349 stars): AdelaiDet is an open source toolbox for multiple instance-level detection and recognition tasks.
  2. AdelaiDepth (Python, 1,051 stars): This repo contains the projects 'Virtual Normal', 'DiverseDepth', and '3D Scene Shape', which aim to solve monocular depth estimation and 3D scene reconstruction from a single image.
  3. Matcher (Python, 401 stars): [ICLR'24] Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching
  4. Poseur (Python, 176 stars): [ECCV 2022] The official repo for the paper "Poseur: Direct Human Pose Regression with Transformers".
  5. AutoStory (143 stars)
  6. DyCo3D (Python, 119 stars)
  7. GenPercept (Python, 112 stars): GenPercept: Diffusion Models Trained with Large Data Are Transferable Visual Models
  8. FrozenRecon (Python, 108 stars): [ICCV 2023] 🧊 FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models
  9. SegPrompt (Python, 108 stars): Official implementation of the ICCV 2023 paper "SegPrompt: Boosting Open-World Segmentation via Category-level Prompt Learning"
  10. FreeCustom (Python, 69 stars): [CVPR 2024] Official PyTorch implementation of FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition
  11. OIR (Python, 69 stars): [ICLR 2024] Official PyTorch/Diffusers implementation of "Object-aware Inversion and Reassembly for Image Editing"
  12. RGM (68 stars)
  13. GeoBench (Python, 42 stars): A toolbox for benchmarking SOTA discriminative and generative geometry estimation models.
  14. GenDeF (38 stars)
  15. DiverGen (Python, 32 stars): DiverGen (CVPR 2024) & BSGAL (ICML 2024)
  16. partially-labelled (19 stars): Learning to segment multi-organ and tumors from multiple partially labeled datasets
  17. FreeCompose (13 stars)
  18. VFN (Python, 12 stars): [ICLR 2024 Spotlight] The official repo for the paper "De novo Protein Design using Geometric Vector Field Networks".
  19. VLModel (Python, 8 stars): Repo of HawkLlama.
  20. Depth3D (Python, 7 stars)
  21. FADiff (Python, 6 stars): [ICML 2024] Floating Anchor Diffusion Model for Multi-motif Scaffolding
  22. STORY (4 stars)
  23. LoRAPrune (Python, 4 stars)
  24. MovieDreamer (4 stars)
  25. OIR-Diffusion (JavaScript, 1 star)