DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors

Open in OpenXLab

Jinbo Xing, Menghan Xia*, Yong Zhang, Haoxin Chen, Wangbo Yu,
Hanyuan Liu, Xintao Wang, Tien-Tsin Wong*, Ying Shan


(* corresponding authors)

From CUHK and Tencent AI Lab.

🔆 Introduction

🔥🔥 New Update Rolls Out for DynamiCrafter! Better Dynamics, Higher Resolution, and Stronger Coherence!

🤗 DynamiCrafter can animate open-domain still images based on a text prompt by leveraging pre-trained video diffusion priors. Please check our project page and paper for more information.
😀 We will continue to improve the model's performance.

👀 Seeking comparisons with Stable Video Diffusion and PikaLabs? Click the image below.

1.1. Showcases (576x1024)

1.2. Showcases (320x512)

1.3. Showcases (256x256)

"bear playing guitar happily, snowing" "boy walking on the street"

2. Applications

2.1 Storytelling video generation (see project page for more details)

2.2 Looping video generation

2.3 Generative frame interpolation

Input starting frame | Input ending frame | Generated video

๐Ÿ“ Changelog

  • [2024.02.05]: 🔥🔥 Release high-resolution models (320x512 & 576x1024).
  • [2023.12.02]: Launch the local Gradio demo.
  • [2023.11.29]: Release the main model at a resolution of 256x256.
  • [2023.11.27]: Launch the project page and update the arXiv preprint.

🧰 Models

| Model | Resolution | GPU Mem. & Inference Time (A100, DDIM 50 steps) | Checkpoint |
|:------|:----------:|:------------------------------------------------|:----------:|
| DynamiCrafter1024 | 576x1024 | 18.3GB & 75s (`perframe_ae=True`) | Hugging Face |
| DynamiCrafter512 | 320x512 | 12.8GB & 20s (`perframe_ae=True`) | Hugging Face |
| DynamiCrafter256 | 256x256 | 11.9GB & 10s (`perframe_ae=False`) | Hugging Face |

Currently, DynamiCrafter supports generating videos of up to 16 frames at a resolution of 576x1024. Inference time can be reduced by using fewer DDIM steps.

GPU memory consumption on an RTX 4090, as reported by @noguchis on Twitter: 18.3GB (576x1024), 12.8GB (320x512), 11.9GB (256x256).
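
As a rough guide to how much time fewer DDIM steps can save, here is a back-of-the-envelope estimate that assumes sampling time scales roughly linearly with the step count (an approximation based on the table above, not a measured benchmark):

```python
# Rough estimate of per-video inference time when lowering the DDIM step count,
# assuming the sampling loop dominates and scales roughly linearly with steps.
REFERENCE_TIME_S = {        # A100 timings from the table above, measured at 50 DDIM steps
    "576x1024": 75.0,
    "320x512": 20.0,
    "256x256": 10.0,
}

def estimated_time_s(resolution: str, ddim_steps: int, reference_steps: int = 50) -> float:
    """Estimate seconds per video at `ddim_steps` by linear scaling from the 50-step reference."""
    return REFERENCE_TIME_S[resolution] * ddim_steps / reference_steps

# e.g. halving the steps to 25 should bring the 576x1024 model to roughly 38 s per video
print(f"~{estimated_time_s('576x1024', 25):.0f} s")
```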

⚙️ Setup

Install Environment via Anaconda (Recommended)

```bash
conda create -n dynamicrafter python=3.8.5
conda activate dynamicrafter
pip install -r requirements.txt
```
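
An optional sanity check (not part of the official setup) to confirm the new environment has the expected Python version and a visible GPU before downloading checkpoints:

```python
# Quick environment sanity check: print the Python and PyTorch versions and
# verify that CUDA sees at least one GPU (inference expects a GPU, see the table above).
import sys

import torch

print(f"Python  {sys.version.split()[0]}")   # expected 3.8.x per the conda env above
print(f"PyTorch {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU: {torch.cuda.get_device_name(0)}")
```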

💫 Inference

1. Command line

  1. Download the pretrained models from Hugging Face and put the model.ckpt for the required resolution at checkpoints/dynamicrafter_[1024|512|256]_v1/model.ckpt.
  2. Run the commands below in a terminal, depending on your device and needs (a checkpoint-layout check is sketched after the commands).

```bash
# Run on a single GPU:
# Select the model based on the required resolution, i.e., 1024|512|256:
sh scripts/run.sh 1024
# Run on multiple GPUs for parallel inference:
sh scripts/run_mp.sh 1024
```
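
If the run script complains about a missing model, the layout from step 1 can be verified with a small helper like the one below (a hypothetical convenience script, not part of the repository):

```python
# Hypothetical helper: verify that model.ckpt sits where the run scripts expect it,
# i.e. checkpoints/dynamicrafter_[1024|512|256]_v1/model.ckpt (see step 1 above).
from pathlib import Path

def check_checkpoint(resolution: str = "1024") -> Path:
    """Return the expected checkpoint path for a resolution tag, raising if it is missing."""
    if resolution not in ("1024", "512", "256"):
        raise ValueError("resolution must be one of '1024', '512', '256'")
    ckpt = Path("checkpoints") / f"dynamicrafter_{resolution}_v1" / "model.ckpt"
    if not ckpt.is_file():
        raise FileNotFoundError(f"Missing {ckpt}; download it from Hugging Face first.")
    return ckpt

if __name__ == "__main__":
    print(check_checkpoint("1024"))
```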

2. Local Gradio demo

  1. Download the pretrained models and put them in the corresponding directories, following the guidelines above.
  2. Run the following command in a terminal, choosing the model based on the required resolution: 1024, 512, or 256 (an optional launch wrapper is sketched below).

```bash
python gradio_app.py --res 1024
```
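
For switching resolutions from another script, a thin wrapper that shells out to the same command is one option (an unofficial sketch; it assumes gradio_app.py is invoked from the repository root, as above):

```python
# Unofficial convenience wrapper: launch the local Gradio demo at a chosen
# resolution by invoking the same command shown above via subprocess.
import subprocess
import sys

def launch_demo(resolution: int = 1024) -> None:
    """Launch gradio_app.py at the given resolution (1024, 512, or 256)."""
    if resolution not in (1024, 512, 256):
        raise ValueError("resolution must be 1024, 512, or 256")
    subprocess.run(
        [sys.executable, "gradio_app.py", "--res", str(resolution)],
        check=True,  # surface a non-zero exit code as an exception
    )

if __name__ == "__main__":
    launch_demo(1024)
```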

Community Extensions: ComfyUI (Thanks to chaojie).

👨‍👩‍👧‍👦 Crafter Family

VideoCrafter1: Framework for high-quality video generation.

ScaleCrafter: Tuning-free method for high-resolution image/video generation.

TaleCrafter: An interactive story visualization tool that supports multiple characters.

LongerCrafter: Tuning-free method for longer high-quality video generation.

MakeYourVideo, might be a Crafter:): Video generation/editing with textual and structural guidance.

StyleCrafter: Stylized-image-guided text-to-image and text-to-video generation.

😉 Citation

```bibtex
@article{xing2023dynamicrafter,
  title={DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors},
  author={Xing, Jinbo and Xia, Menghan and Zhang, Yong and Chen, Haoxin and Yu, Wangbo and Liu, Hanyuan and Wang, Xintao and Wong, Tien-Tsin and Shan, Ying},
  journal={arXiv preprint arXiv:2310.12190},
  year={2023}
}
```

🙏 Acknowledgements

We would like to thank AK (@_akhaliq) for helping set up the Hugging Face online demo, and camenduru for providing the Replicate and Colab online demos.

📢 Disclaimer

We developed this repository for RESEARCH purposes, so it may only be used for personal, research, or other non-commercial purposes.