🎬 Show-1
Rui Zhao*, Lingmin Ran*, Yuchao Gu, Difei Gao, Mike Zheng Shou†
* Equal Contribution, † Corresponding Author
Project Page | arXiv | PDF | 🤗 Space | Colab | Replicate Demo
News
- [10/12/2023] Code and weights released!
Setup
Requirements
pip install -r requirements.txt
Note: PyTorch 2.0+ is highly recommended for better efficiency and speed on GPUs.
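As a quick sanity check before running inference, you can verify the installed PyTorch version meets the recommendation. This is a minimal sketch; the `has_torch2` helper is ours, not part of the Show-1 codebase:

```python
# Hedged sketch: check that the installed PyTorch meets the recommended 2.0+.
# `has_torch2` is a hypothetical helper, not part of the Show-1 codebase.

def has_torch2(version: str) -> bool:
    """Return True if a version string like '2.1.0+cu118' is 2.0 or newer."""
    major = int(version.split(".")[0])
    return major >= 2

print(has_torch2("2.1.0+cu118"))  # True
print(has_torch2("1.13.1"))       # False
```

In practice you would call it as `import torch; assert has_torch2(torch.__version__)`.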
Weights
All model weights for Show-1 are available on Show Lab's HuggingFace page: Base Model (show-1-base), Interpolation Model (show-1-interpolation), and Super-Resolution Model (show-1-sr1, show-1-sr2).
Note that our show-1-sr1 incorporates the image super-resolution model from DeepFloyd-IF, DeepFloyd/IF-II-L-v1.0, to upsample the first frame of the video. To obtain the corresponding weights, follow their official instructions.
Usage
To generate a video from a text prompt, run the command below:
python run_inference.py
By default, the videos generated at each stage are saved to the outputs folder in GIF format. The script automatically fetches the necessary model weights from HuggingFace. If you prefer, you can download the weights manually using git lfs and then update pretrained_model_path to point to your local directory. Here's how:
git lfs install
git clone https://huggingface.co/showlab/show-1-base
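Alternatively, the same weights can be fetched from Python with the huggingface_hub package. This is a sketch, assuming `pip install huggingface_hub`; the repo ids follow the showlab/show-1-base naming used in this README, and the local_dir layout is our own choice:

```python
# Hedged sketch: download the Show-1 weight repos with huggingface_hub
# instead of git lfs. Assumes `pip install huggingface_hub`; repo ids
# follow the showlab/show-1-base naming used above.
SHOW1_REPOS = [
    "showlab/show-1-base",
    "showlab/show-1-interpolation",
    "showlab/show-1-sr1",
    "showlab/show-1-sr2",
]

def download_all(root: str = "./weights") -> list:
    """Snapshot each repo into root/<repo-name> and return the local paths."""
    from huggingface_hub import snapshot_download  # imported lazily
    paths = []
    for repo in SHOW1_REPOS:
        name = repo.split("/")[-1]
        paths.append(snapshot_download(repo_id=repo, local_dir=f"{root}/{name}"))
    return paths

# download_all()  # uncomment to fetch the weights (large downloads)
```

You can then point pretrained_model_path at the returned local directories.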
A demo is also available in the showlab/Show-1 🤗 Space.
You can also run the Gradio demo locally with:
python app.py
Demo Video
Show-1.Demo.Video.mp4
Citation
If you make use of our work, please cite our paper.
@article{zhang2023show,
title={Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation},
author={Zhang, David Junhao and Wu, Jay Zhangjie and Liu, Jia-Wei and Zhao, Rui and Ran, Lingmin and Gu, Yuchao and Gao, Difei and Shou, Mike Zheng},
journal={arXiv preprint arXiv:2309.15818},
year={2023}
}
Commercial Use
We are working with the university (NUS) to figure out the exact paperwork needed to approve commercial use requests. In the meantime, to speed up the process, we'd like to solicit expressions of interest from the community, which we will later process with high priority. If you are keen, please email us at [email protected] and [email protected] and answer the following questions, if possible:
- Who are you / your company?
- What is your product / application?
- How can Show-1 benefit your product?
Shoutouts
- This work heavily builds on diffusers, deep-floyd/IF, modelscope, and zeroscope. Thanks for open-sourcing!
- Thanks to @camenduru for providing the Colab demo and @chenxwh for providing the Replicate demo.