LongAnimateDiff
Sapir Weissbuch, Naomi Ken Korem, Daniel Shalem, Yoav HaCohen | Lightricks Research
We are pleased to release the "LongAnimateDiff" model, which has been trained to generate videos with a variable frame count, ranging from 16 to 64 frames. This model is compatible with the original AnimateDiff model. For optimal results, we recommend using a motion scale of 1.15.
We release two models:
- The LongAnimateDiff model, capable of generating videos with frame counts ranging from 16 to 64. You can download the weights from either Google Drive or HuggingFace.
- A specialized model designed to generate 32-frame videos. This model typically produces higher-quality videos than the 16-64-frame LongAnimateDiff model. Please download the weights from Google Drive or HuggingFace (a download sketch follows this list).
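If you prefer to fetch the HuggingFace weights from a script, the sketch below uses `huggingface_hub`. The repo id and checkpoint filename are assumptions; verify them against the HuggingFace model page before use.

```python
# Minimal sketch: download a motion-module checkpoint from HuggingFace.
# Both the repo id and the filename are assumptions -- check the actual
# HuggingFace model page for the exact names.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="Lightricks/LongAnimateDiff",   # assumed repo id
    filename="lt_long_mm_32_frames.ckpt",   # assumed checkpoint filename
)
print(f"Checkpoint saved to {ckpt_path}")
```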
Update: December 27, 2023
- We are releasing version 1.1 of the LongAnimateDiff model, which produces higher-quality 64-frame videos.
Results
Installation and Usage
ComfyUI usage
You can run our models with the ComfyUI framework. Place the downloaded weights in the AnimateDiff models folder of your ComfyUI installation, then load and run the graph below.
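As a convenience, here is a small Python sketch that copies a downloaded checkpoint into the models folder of a ComfyUI AnimateDiff custom node. Both paths are assumptions and depend on where ComfyUI and the AnimateDiff node live on your machine.

```python
# Hedged sketch: place the motion module where the ComfyUI AnimateDiff node
# looks for it. Adjust both paths to match your installation; the custom-node
# folder name below is an assumption.
import shutil
from pathlib import Path

src = Path("lt_long_mm_32_frames.ckpt")  # assumed checkpoint filename
dst = Path("ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/models")
dst.mkdir(parents=True, exist_ok=True)
shutil.copy2(src, dst / src.name)
```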
AnimateDiff codebase usage
Note: our models work better with a motion scale greater than 1. Motion scale is not implemented in the AnimateDiff repository, so we recommend using ComfyUI.
```bash
git clone https://github.com/guoyww/AnimateDiff.git
cd AnimateDiff
conda env create -f environment.yaml
conda activate animatediff
git clone https://github.com/Lightricks/LongAnimateDiff.git
bash download_bashscripts/5-RealisticVision.sh
```
- Download the model from Google Drive / HuggingFace and place it in `models/Motion_Module`.
- Download the sd-1-5 base model from HuggingFace (see the sketch after this list).
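If you want to script the base-model download, `huggingface_hub.snapshot_download` can fetch it. The repo id below is the usual Stable Diffusion v1.5 location, but treat it as an assumption, since hosting can change.

```python
# Sketch: fetch the Stable Diffusion 1.5 base model for --pretrained_model_path.
# The repo id is an assumption; substitute whichever SD 1.5 mirror you use.
from huggingface_hub import snapshot_download

sd15_path = snapshot_download(repo_id="runwayml/stable-diffusion-v1-5")
print(f"Pass this as --pretrained_model_path: {sd15_path}")
```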
```bash
python -m scripts.animate --config LongAnimateDiff/configs/RealisticVision-32-animate.yaml --inference_config LongAnimateDiff/configs/long-inference.yaml --L 32 --pretrained_model_path {path to sd-1-5 base model}
```
To run the 64-frame model:
- Modify the `temporal_position_encoding_max_len` parameter in `LongAnimateDiff/configs/long-inference.yaml` to 128 (a scripted way to do this is sketched after this list).
- Download the model from Google Drive / HuggingFace and place it in `models/Motion_Module`.
- Download epicRealismNaturalSin from civit.ai to `models/DreamBooth_LoRA/epicRealismNaturalSin.safetensors`.
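For those who prefer to script the config edit, here is a minimal sketch using PyYAML. The exact nesting of the key inside `long-inference.yaml` is not assumed; the helper updates `temporal_position_encoding_max_len` wherever it appears in the config.

```python
# Sketch: set temporal_position_encoding_max_len to 128 in long-inference.yaml.
# The recursive helper avoids assuming where the key is nested in the config.
import yaml

CONFIG = "LongAnimateDiff/configs/long-inference.yaml"

def set_key(node, key, value):
    """Recursively update every occurrence of `key` in a nested dict/list."""
    if isinstance(node, dict):
        for k, v in node.items():
            if k == key:
                node[k] = value
            else:
                set_key(v, key, value)
    elif isinstance(node, list):
        for item in node:
            set_key(item, key, value)

with open(CONFIG) as f:
    cfg = yaml.safe_load(f)

set_key(cfg, "temporal_position_encoding_max_len", 128)

with open(CONFIG, "w") as f:
    yaml.safe_dump(cfg, f)
```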
```bash
python -m scripts.animate --config LongAnimateDiff/configs/EpicRealism-64-animate.yaml --inference_config LongAnimateDiff/configs/long-inference.yaml --L {select number from 32|48|64} --pretrained_model_path {path to sd-1-5 base model}
```
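Here `--L` selects the output frame count; with `temporal_position_encoding_max_len` raised to 128, values of 32, 48, or 64 are supported.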
Disclaimer
This project is released for academic use. We disclaim responsibility for user-generated content. Users are solely liable for their actions. The project contributors are not legally affiliated with, nor accountable for, users' behaviors. Use the generative model responsibly, adhering to ethical and legal standards.