
text2video Extension for AUTOMATIC1111's StableDiffusion WebUI

Auto1111 extension implementing various text2video models, such as ModelScope and VideoCrafter, using only Auto1111 webui dependencies and downloadable models (so no logins are required anywhere)

Requirements

ModelScope

6 GB of VRAM should be enough to run on GPU at 256x256 with the low-VRAM VAE enabled (and we are already getting reports of people generating 192x192 videos with 4 GB of VRAM). A 24-frame 256x256 video definitely fits into the 12 GB of an NVIDIA GeForce RTX 2080 Ti, and if you have a video card that supports the Torch2 attention optimization, you can fit a whopping 125-frame (8-second) video into the same 12 GB of VRAM. 250 frames (16 seconds) under the same conditions take 20 GB.

Prompt: best quality, anime girl dancing

exampleUntitled.mp4

We would appreciate any help with this extension, especially pull requests.

LoRA Support

Currently, there is support for LoRAs trained with this fine-tuning repository: https://github.com/ExponentialML/Text-To-Video-Finetuning#updates. Please follow the instructions there on how to train them.

After training, simply place them in the default LoRA directory defined by your webui installation.
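
For instance, a minimal sketch of dropping a freshly trained LoRA into place (both paths here are assumptions; adjust them to your own training output and webui installation):

    import shutil
    from pathlib import Path

    # Assumed paths -- adjust to your own training output and webui install.
    trained_lora = Path("Text-To-Video-Finetuning/outputs/my_lora.safetensors")
    webui_lora_dir = Path("stable-diffusion-webui/models/Lora")

    # Make sure the LoRA directory exists, then copy the trained file in.
    webui_lora_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(trained_lora, webui_lora_dir / trained_lora.name)
    print(f"Copied {trained_lora.name} to {webui_lora_dir}")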

VideoCrafter (WIP; needs more devs to maintain it properly as well)

VideoCrafter runs with around 9.2 GB of VRAM with the settings at their defaults.

Major changes between versions

Update 2023-03-26: prompt weights implemented! (ModelScope only as of 2023-04-05)

Update 2023-03-27: VAE settings and "Keep model in VRAM" moved to the general webui settings under the 'ModelScopeTxt2Vid' section.

Update 2023-04-05: added VideoCrafter support; renamed the extension to plainly 'sd-webui-text2video'.

Update 2023-04-13: in-framing/in-painting support: allows you to 'animate' an existing picture or even seamlessly loop videos!

Update 2023-04-15: MEGA-UPDATE: Torch2/xformers optimizations make it possible to render a 125-frame video on 12 GB of VRAM. CPU offloading no longer happens if keep_pipe_in_vram is checked.

Update 2023-04-16: WebAPI is available! (See the sketch after this list.)

Update 2023-07-02: alternate samplers, model hot-switching.
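
As a rough illustration of driving the WebAPI from a script, here is a hedged sketch; the endpoint path and the payload field names are assumptions, so verify them against the extension's actual API documentation:

    import requests

    # Hypothetical endpoint and payload -- verify the real route and field
    # names against the extension's WebAPI documentation before relying on this.
    url = "http://127.0.0.1:7860/t2v/run"
    payload = {
        "prompt": "best quality, anime girl dancing",
        "frames": 24,
        "width": 256,
        "height": 256,
    }

    response = requests.post(url, json=payload)
    response.raise_for_status()
    print(response.json())  # e.g. a path or identifier for the rendered video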

Test examples:

ModelScope

Prompt: cinematic explosion by greg rutkowski

vid.mp4

Prompt: really attractive anime girl skating, by makoto shinkai, cinematic lighting

gosh.mp4

'Continuing' an existing image

Prompt: best quality, astronaut dog

egUntitled.mp4

Prompt: explosion

expl.mp4

In-painting and looping back the videos

Prompt: nuclear explosion

galaxybrain.mp4

Prompt: best quality, lots of cheese

matcheeseUntitled.mp4

VideoCrafter

Prompt: anime 1girl reimu touhou

working.mp4

Where to get the weights

ModelScope

Download the following files from the original HuggingFace repository. Alternatively, download the half-precision fp16 pruned weights (they are smaller and use less VRAM on loading):

  • VQGAN_autoencoder.pth
  • configuration.json
  • open_clip_pytorch_model.bin
  • text2video_pytorch_model.pth

Put them in stable-diffusion-webui/models/ModelScope/t2v, creating those two folders if they are missing.
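
As an example, a minimal sketch that fetches the four files with huggingface_hub and drops them into the right folder (the repository id below is an assumption; substitute the HuggingFace repo you actually download from):

    from pathlib import Path
    from huggingface_hub import hf_hub_download

    # Assumed repository id -- substitute the HuggingFace repo you actually use.
    repo_id = "damo-vilab/modelscope-damo-text-to-video-synthesis"
    target = Path("stable-diffusion-webui/models/ModelScope/t2v")
    target.mkdir(parents=True, exist_ok=True)

    # Download each required file directly into the extension's model folder.
    for filename in (
        "VQGAN_autoencoder.pth",
        "configuration.json",
        "open_clip_pytorch_model.bin",
        "text2video_pytorch_model.pth",
    ):
        hf_hub_download(repo_id=repo_id, filename=filename, local_dir=target)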

VideoCrafter

Download the pretrained T2V models via this link, or download the pruned half-precision weights, and put model.ckpt at models/VideoCrafter/model.ckpt.

Fine-tunes and how to use them

Thanks to https://github.com/ExponentialML/Text-To-Video-Finetuning you can fine-tune your models!

To utilize a fine-tuned model here, use this script, which will convert the Diffusers-formatted model that the repo outputs into the original weights format.

Prominent Fine-tunes

ZeroScope v2

Trained by @cerspense on high-quality YouTube videos. Download the files from the folder named zs2_XL at cerspense/zeroscope_v2_XL, then add the missing VQGAN_autoencoder.pth and configuration.json from any other ModelScope model.

paradot.mp4

Potat1

Potat1 is a ModelScope-based model trained by @camenduru on 2197 clips at a resolution of 1024x576, which makes it the first open-source high-resolution text2video model.

vid.2.mp4

To download the plug-and-play weights for the extension, use this link: https://huggingface.co/kabachuha/potat1-with-text-encoder-original-format.

Animov-0.1

Animov-0.1 by strangeman3107. The converted weights for this model reside here.

w.mp4

Screenshots

txt2vid with img2vid

Screenshot 2023-04-15 at 17-53-36 Stable Diffusion

vid2vid

Screenshot 2023-04-15 at 17-33-32 Stable Diffusion

Dev resources

ModelScope

HuggingFace space:

https://huggingface.co/spaces/damo-vilab/modelscope-text-to-video-synthesis

The model PyTorch implementation from ModelScope:

https://github.com/modelscope/modelscope/tree/master/modelscope/models/multi_modal/video_synthesis

Google Colab from the devs:

https://colab.research.google.com/drive/1uW1ZqswkQ9Z9bp5Nbo5z59cAn7I0hE6R?usp=sharing

VideoCrafter

Github:

https://github.com/VideoCrafter/VideoCrafter
