
Repository Details

[CVPR 2024] PIA, your Personalized Image Animator. Animate your images by text prompt, combining with DreamBooth, to achieve stunning videos. (PIA: your personalized image animator, which uses text prompts to turn images into wonderful animations.)

PIA: Personalized Image Animator

PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models

Yiming Zhang†, Zhening Xing†, Yanhong Zeng, Youqing Fang, Kai Chen*

(*Corresponding Author, †Equal Contribution)

Links: arXiv · Project Page · Open in OpenXLab · Third-Party Colab · HuggingFace Model · Open in HuggingFace · Replicate

You may also want to try other projects from our team, such as MMagic.

PIA is a personalized image animation method that generates videos with high motion controllability and strong text and image alignment.

What's New

[2024/01/03] Add Replicate Demo & API!

[2024/01/03] Add third-party Colab!

[2023/12/28] PIA can animate a 1024x1024 image with just 16GB of GPU memory with scaled_dot_product_attention!

[2023/12/25] HuggingFace demo is available now! 🤗 Hub

[2023/12/22] Released the model and demo of PIA. Try it to make your own personalized movie!

Setup

Prepare Environment

Use the following commands to install PyTorch 2.0.0 and the other dependencies:

conda env create -f environment-pt2.yaml
conda activate pia

If you want to use a lower version of PyTorch (e.g. 1.13.1), use the following commands instead:

conda env create -f environment.yaml
conda activate pia

We strongly recommend PyTorch 2.0.0, which supports scaled_dot_product_attention for memory-efficient image animation.
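Since scaled_dot_product_attention only exists in PyTorch 2.0 and later, a quick stdlib sketch (the helper name supports_sdpa is ours, not part of PIA) to sanity-check a PyTorch version string before launching a long run:

```python
def supports_sdpa(version: str) -> bool:
    """Return True if a PyTorch version string is >= 2.0 (SDPA available)."""
    # Strip local/build suffixes such as "2.0.0+cu118" before comparing.
    base = version.split("+")[0]
    major, minor = (int(part) for part in base.split(".")[:2])
    return (major, minor) >= (2, 0)

print(supports_sdpa("2.0.0+cu118"))  # True: memory-efficient SDPA path available
print(supports_sdpa("1.13.1"))       # False: falls back to standard attention
```

In practice you would pass in `torch.__version__`; the string form is shown here so the check reads standalone.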

Download Checkpoints

  • Download Stable Diffusion v1-5:

    git lfs install
    git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 models/StableDiffusion/

  • Download the personalized models:

    bash download_bashscripts/1-RealisticVision.sh
    bash download_bashscripts/2-RcnzCartoon.sh
    bash download_bashscripts/3-MajicMix.sh

  • Download PIA:

    bash download_bashscripts/0-PIA.sh

    You can also download pia.ckpt via the links on Google Drive or HuggingFace.

    Put checkpoints as follows:

    └── models
        ├── DreamBooth_LoRA
        │   ├── ...
        ├── PIA
        │   ├── pia.ckpt
        └── StableDiffusion
            ├── vae
            ├── unet
            └── ...
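
A minimal sketch (our own helper, not part of the PIA codebase) that verifies this layout before launching inference, so a missing checkpoint fails fast with a clear message:

```python
from pathlib import Path

# Paths expected by the layout above; pia.ckpt is the only strictly named file.
REQUIRED = [
    "models/PIA/pia.ckpt",
    "models/StableDiffusion/vae",
    "models/StableDiffusion/unet",
    "models/DreamBooth_LoRA",
]

def check_layout(root: str) -> list:
    """Return the list of expected paths missing under `root`."""
    base = Path(root)
    return [rel for rel in REQUIRED if not (base / rel).exists()]

missing = check_layout(".")
if missing:
    print("Missing checkpoints:", ", ".join(missing))
```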
    

    Usage

    Image Animation

    Image-to-video results can be obtained by:

    python inference.py --config=example/config/lighthouse.yaml
    python inference.py --config=example/config/harry.yaml
    python inference.py --config=example/config/majic_girl.yaml
    

    Run the commands above and you will get animations of each input image for the following prompts:

    • Lighthouse: lightning, lighthouse · sun rising, lighthouse · fireworks, lighthouse
    • Harry: 1boy smiling · 1boy playing the magic fire · 1boy is waving hands
    • Majic girl: 1girl is smiling · 1girl is crying · 1girl, snowing

    Motion Magnitude

    You can control the motion magnitude via the --magnitude parameter:

    python inference.py --config=example/config/xxx.yaml --magnitude=0 # Small Motion
    python inference.py --config=example/config/xxx.yaml --magnitude=1 # Moderate Motion
    python inference.py --config=example/config/xxx.yaml --magnitude=2 # Large Motion
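
To render all three magnitudes for one config in a single pass, a small sketch (the sweep helper is ours; it only builds the command lines shown above, e.g. for handing to subprocess):

```python
import shlex

# Magnitude values documented above: 0 = small, 1 = moderate, 2 = large motion.
MAGNITUDES = (0, 1, 2)

def magnitude_sweep(config: str) -> list:
    """Build one inference command line per motion magnitude for `config`."""
    return [
        f"python inference.py --config={shlex.quote(config)} --magnitude={m}"
        for m in MAGNITUDES
    ]

for cmd in magnitude_sweep("example/config/labrador.yaml"):
    print(cmd)
```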

    Examples:

    python inference.py --config=example/config/labrador.yaml
    python inference.py --config=example/config/bear.yaml
    python inference.py --config=example/config/genshin.yaml

    Each example below shows the input image and prompt animated at small, moderate, and large motion:

    • a golden labrador is running
    • 1bear is walking, ...
    • cherry blossom, ...

    Style Transfer

    To achieve style transfer, run the commands below (don't forget to set the base model in xxx.yaml):

    Examples:

    python inference.py --config example/config/concert.yaml --style_transfer
    python inference.py --config example/config/ania.yaml --style_transfer

    Each example below shows the input image, the base model used, and the prompts:

    • Realistic Vision, RCNZ Cartoon 3d: 1man is smiling · 1man is crying · 1man is singing
    • RCNZ Cartoon 3d: 1girl smiling · 1girl open mouth · 1girl is crying, pout

    Loop Video

    You can generate a looping video with the --loop flag:

    python inference.py --config=example/config/xxx.yaml --loop

    Examples:

    python inference.py --config=example/config/lighthouse.yaml --loop
    python inference.py --config=example/config/labrador.yaml --loop

    Looped results for each input image and prompt:

    • Lighthouse: lightning, lighthouse · sun rising, lighthouse · fireworks, lighthouse
    • Labrador: labrador jumping · labrador walking · labrador running

    AnimateBench

    We have open-sourced AnimateBench on HuggingFace which includes images, prompts and configs to evaluate PIA and other image animation methods.

    Contact Us

    Yiming Zhang: [email protected]

    Zhening Xing: [email protected]

    Yanhong Zeng: [email protected]

    Acknowledgements

    The code is built upon AnimateDiff, Tune-a-Video, and PySceneDetect.
