
AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars

Fangzhou Hong1*  Mingyuan Zhang1*  Liang Pan1  Zhongang Cai1,2,3  Lei Yang2  Ziwei Liu1+
1S-Lab, Nanyang Technological University  2SenseTime Research  3Shanghai AI Laboratory
*equal contribution  +corresponding author

Accepted to SIGGRAPH 2022 (Journal Track)

TL;DR

AvatarCLIP generates and animates avatars given descriptions of body shapes, appearances, and motions.

A tall and skinny female soldier that is arguing. A skinny ninja that is raising both arms. An overweight sumo wrestler that is sitting. A tall and fat Iron Man that is running.

This repository contains the official implementation of AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars.


[Project Page] • [arXiv] • [High-Res PDF (166M)] • [Supplementary Video] • [Colab Demo]

Updates

[09/2022] 🔥🔥🔥If you are looking for a higher-quality 3D human generation method, go check out our new work EVA3D!🔥🔥🔥

[09/2022] 🔥🔥🔥If you are looking for a higher-quality text2motion method, go check out our new work MotionDiffuse!🔥🔥🔥

[07/2022] Code release for motion generation part!

[05/2022] Paper uploaded to arXiv.

[05/2022] Add a Colab Demo for avatar generation!

[05/2022] Support converting the generated avatar to the animatable FBX format! Check out how to use the FBX models, or the instructions for the conversion code.

[05/2022] Code release for avatar generation part!

[04/2022] AvatarCLIP is accepted to SIGGRAPH 2022 (Journal Track)🥳!

Citation

If you find our work useful for your research, please consider citing the paper:

@article{hong2022avatarclip,
    title={AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars},
    author={Hong, Fangzhou and Zhang, Mingyuan and Pan, Liang and Cai, Zhongang and Yang, Lei and Liu, Ziwei},
    journal={ACM Transactions on Graphics (TOG)},
    volume={41},
    number={4},
    articleno={161},
    pages={1--19},
    year={2022},
    publisher={ACM New York, NY, USA},
    doi={10.1145/3528223.3530094},
}

Use Generated FBX Models

Download

  1. Visit our project page and go to the section 'Avatar Gallery'.

  2. Pick a model you like and click 'Load Model' below it.

  3. Click the 'Download FBX' link at the bottom of the pop-up viewer.

Import to Your Favourite 3D Software (e.g. Blender, Unity3D)

The FBX models are already rigged. Use your motion library to animate them!

Upload to Mixamo

To make use of the rich motion library provided by Mixamo, you can also upload the FBX model to Mixamo. The rigging process is completely automatic!

Installation

We recommend using Anaconda to manage the Python environment. The setup commands below are provided for your reference.

git clone https://github.com/hongfz16/AvatarCLIP.git
cd AvatarCLIP
conda create -n AvatarCLIP python=3.7
conda activate AvatarCLIP
conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=10.1 -c pytorch
pip install -r requirements.txt

In addition to the steps above, you should also install neural_renderer following its instructions. Either before or after compiling neural_renderer, remember to add the following three lines to neural_renderer/perspective.py after line 19.

x[z<=0] = 0
y[z<=0] = 0
z[z<=0] = 0

This quick fix addresses a rendering issue where objects behind the camera would also be rendered. Be careful when using this patched version of neural_renderer in your other projects, because the fix makes the rendering process non-differentiable.

To support offscreen rendering for motion visualization, you should also install the osmesa library.

conda install -c menpo osmesa
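
As an optional sanity check (not part of the original instructions), the short snippet below simply confirms that the pinned PyTorch build can see CUDA before you proceed:

# Optional environment check, assuming the conda setup above was used
import torch

# Expect 1.7.0, 10.1 and True if the GPU environment is set up correctly
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())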

Data Preparation

Download SMPL Models

Register and download SMPL models here. Put the downloaded models in the folder smpl_models. The folder structure should look like

./
├── ...
└── smpl_models/
    ├── smpl/
        ├── SMPL_FEMALE.pkl
        ├── SMPL_MALE.pkl
        └── SMPL_NEUTRAL.pkl
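
As an optional check that the models are in place, the snippet below loads the neutral SMPL model with the smplx package (acknowledged as a dependency below); this is the standard smplx API rather than a script from this repository:

# Optional check; smplx is acknowledged as a dependency of this project
import smplx

# Looks for smpl_models/smpl/SMPL_NEUTRAL.pkl following the layout above
model = smplx.create("smpl_models", model_type="smpl", gender="neutral")
print(model)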

Download Pretrained Models & Other Data

This download is only needed for coarse shape generation and motion generation; you can skip it if you only want to use the other parts. Download the pretrained weights and other required data here. Put them in the folder AvatarGen so that the folder structure looks like

./
├── ...
└── AvatarGen/
    └── ShapeGen/
        └── data/
            ├── codebook.pth
            ├── model_VAE_16.pth
            ├── nongrey_male_0110.jpg
            ├── smpl_uv.mtl
            └── smpl_uv.obj

Pretrained weights and the human texture for motion generation can be downloaded here. Note that the human texture we use to render poses is from the SURREAL dataset. You should also download the pretrained weights of VPoser v2.0. Put them in the folder AvatarAnimate so that the folder structure looks like

./
├── ...
└── AvatarAnimate/
    └── data/
        ├── codebook.pth
        ├── motion_vae.pth
        ├── pose_realnvp.pth
        ├── nongrey_male_0110.jpg
        ├── smpl_uv.mtl
        ├── smpl_uv.obj
        └── vposer
            ├── V02_05.log
            ├── V02_05.yaml
            └── snapshots
                ├── V02_05_epoch=08_val_loss=0.03.ckpt
                └── V02_05_epoch=13_val_loss=0.03.ckpt

Avatar Generation

Coarse Shape Generation

Folder AvatarGen/ShapeGen contains the code for this part. Run the following command to generate the coarse shape corresponding to the shape description 'a strong man'. We recommend using the prompt augmentation 'a 3d rendering of xxx in unreal engine' for better results. The generated coarse body mesh will be stored under AvatarGen/ShapeGen/output/coarse_shape.

python main.py --target_txt 'a 3d rendering of a strong man in unreal engine'

Then we need to render the mesh for initialization of the implicit avatar representation. Use the following command for rendering.

python render.py --coarse_shape_obj output/coarse_shape/a_3d_rendering_of_a_strong_man_in_unreal_engine.obj --output_folder ${RENDER_FOLDER}

Shape Sculpting and Texture Generation

Note that all the code is tested on an NVIDIA V100 (32GB memory). To run on GPUs with less memory, try scaling down the network or lowering max_ray_num in the config files. You can refer to confs/examples_small/example.conf or our Colab demo for a scaled-down version of AvatarCLIP.

Folder AvatarGen/AppearanceGen contains the code for this part. We provide data, a pretrained model, and scripts to perform shape sculpting and texture generation on a zero-beta body (the mean shape defined by SMPL). Many example scripts are provided under AvatarGen/AppearanceGen/confs/examples. For example, to generate 'Abraham Lincoln', which is defined in the config file confs/examples/abrahamlincoln.conf, use the following command.

python main.py --mode train_clip --conf confs/examples/abrahamlincoln.conf

Results will be stored in AvatarCLIP/AvatarGen/AppearanceGen/exp/smpl/examples/abrahamlincoln.

If you wish to perform shape sculpting and texture generation on the previously generated coarse shape, we also provide example config files in confs/base_models/astrongman.conf and confs/astrongman/*.conf. Two steps of optimization are required, as follows.

# Initialization of the implicit avatar
python main.py --mode train --conf confs/base_models/astrongman.conf
# Shape sculpting and texture generation on the initialized implicit avatar
python main.py --mode train_clip --conf confs/astrongman/hulk.conf

Marching Cube

To extract meshes from the generated implicit avatar, one may use the following command.

python main.py --mode validate_mesh --conf confs/examples/abrahamlincoln.conf

The final high-resolution mesh will be stored as AvatarCLIP/AvatarGen/AppearanceGen/exp/smpl/examples/abrahamlincoln/meshes/00030000.ply.
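
If you just want to inspect the extracted mesh, an optional check with trimesh (an extra package, not listed in this repository's requirements) might look like the following, run from AvatarGen/AppearanceGen:

# Optional: inspect the extracted mesh; install with `pip install trimesh`
import trimesh

# The path is the example output location from the command above
mesh = trimesh.load("exp/smpl/examples/abrahamlincoln/meshes/00030000.ply")
print(mesh.vertices.shape, mesh.faces.shape)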

Convert Avatar to FBX Format

For convenience when using the generated avatar with modern graphics pipelines, we also provide scripts to rig the avatar and convert it to the FBX format. See the instructions here.

Motion Generation

Candidate Poses Generation

Here we provide four different methods for pose generation.

  1. PoseOptimizer: directly optimizes the SMPL pose parameters (theta)

  2. VPoserOptimizer: optimizes in the latent space of VPoser

  3. VPoserRealNVP: obtains VPoser latent codes from a pretrained conditional RealNVP

  4. VPoserCodebook: selects the poses most similar to the given text feature
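
The codebook-based method (item 4) is essentially a nearest-neighbour lookup in CLIP feature space. The sketch below illustrates the idea only; the keys assumed for data/codebook.pth ("clip_features" and "poses") and the feature dimensions are assumptions, not the repository's actual format.

# Minimal sketch of the VPoserCodebook idea: rank precomputed candidate
# poses by CLIP similarity to the text prompt. The codebook layout below
# is an assumption, not this repo's exact format.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

codebook = torch.load("data/codebook.pth", map_location=device)  # assumed dict
pose_feats = codebook["clip_features"].float()                   # (N, 512), assumed key
poses = codebook["poses"]                                        # (N, D) pose parameters, assumed key

with torch.no_grad():
    tokens = clip.tokenize(["a rendered 3d man is raising both arms"]).to(device)
    text_feat = model.encode_text(tokens).float()

# Cosine similarity between the prompt and every candidate pose
pose_feats = pose_feats / pose_feats.norm(dim=-1, keepdim=True)
text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
scores = (pose_feats @ text_feat.T).squeeze(-1)
topk = scores.topk(5).indices.tolist()  # top-5 candidates, mirroring candidate_0..4
selected_poses = poses[topk]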

We provide configurations to compare these methods. Here are some examples:

# Suppose your current location is `AvatarCLIP/AvatarAnimate`

# Use PoseOptimizer method to generate poses for "arguing"
python main.py --conf confs/pose_ablation/pose_optimizer/argue.conf
# Results are stored in `AvatarCLIP/AvatarAnimate/exp/pose_ablation/pose_optimizer/argue` directory
# candidate_0.jpg, candidate_1.jpg, ..., candidate_4.jpg are the top-5 poses
# candidate_0.npy, candidate_1.npy, ..., candidate_4.npy are corresponding parameters

# Use VPoserOptimizer method to generate poses for "praying"
python main.py --conf confs/pose_ablation/vposer_optimizer/pray.conf
# Results are stored in `AvatarCLIP/AvatarAnimate/exp/pose_ablation/vposer_optimizer/pray` directory

# Use VPoserRealNVP method to generate poses for "shooting a basketball"
python main.py --conf confs/pose_ablation/vposer_realnvp/shoot_basketball.conf
# Results are stored in `AvatarCLIP/AvatarAnimate/exp/pose_ablation/vposer_realnvp/shoot_basketball` directory

# Use VPoserCodebook method to generate poses for "running"
python main.py --conf confs/pose_ablation/vposer_codebook/run.conf
# Results are stored in `AvatarCLIP/AvatarAnimate/exp/pose_ablation/vposer_codebook/run` directory

Motion Generation

Here we provide three different methods for motion generation.

  1. MotionInterpolation: directly interpolates between the given poses

  2. MotionOptimizer (baseline): optimizes the latent code of a pretrained VAE with a simple reconstruction loss

  3. MotionOptimizer (ours): optimizes the latent code of a pretrained VAE with a weighted reconstruction loss, a delta loss, and a CLIP loss
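
To make the interpolation baseline (item 1) concrete, the sketch below linearly interpolates between candidate poses to form a motion sequence. It is an illustration under simplifying assumptions: real SMPL rotations would normally be interpolated on the rotation manifold (e.g. slerp), and the candidate_*.npy layout is taken from the pose-generation outputs described above.

import numpy as np

def interpolate_motion(keyposes, frames_per_segment=30):
    # keyposes: (K, D) array of pose parameters; returns a (T, D) motion
    segments = []
    for a, b in zip(keyposes[:-1], keyposes[1:]):
        t = np.linspace(0.0, 1.0, frames_per_segment, endpoint=False)[:, None]
        segments.append((1.0 - t) * a + t * b)  # lerp between consecutive keyposes
    segments.append(keyposes[-1:])              # hold the final pose
    return np.concatenate(segments, axis=0)

# e.g. stack the top-5 candidate poses saved by the pose-generation step
# keyposes = np.stack([np.load(f"candidate_{i}.npy") for i in range(5)])
# motion = interpolate_motion(keyposes)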

We provide configurations to compare these methods. Here are some examples:

# Suppose your current location is `AvatarCLIP/AvatarAnimate`

# Use MotionInterpolation method to generate motion for "arguing"
python main.py --conf confs/motion_ablation/interpolation/argue.conf
# Results are stored in `AvatarCLIP/AvatarAnimate/exp/motion_ablation/interpolation/argue` directory
# candidate_0.jpg, candidate_1.jpg, ..., candidate_4.jpg are the top-5 poses
# candidate_0.npy, candidate_1.npy, ..., candidate_4.npy are corresponding parameters
# motion.mp4 is the generated motion
# motion.npy is corresponding parameters

# Use MotionOptimizer (baseline) method to generate motion for "praying"
python main.py --conf confs/motion_ablation/baseline/pray.conf
# Results are stored in `AvatarCLIP/AvatarAnimate/exp/motion_ablation/baseline/pray` directory

# Use MotionOptimizer (ours) method to generate motion for "shooting a basketball"
python main.py --conf confs/motion_ablation/motion_optimizer/shoot_basketball.conf
# Results are stored in `AvatarCLIP/AvatarAnimate/exp/motion_ablation/motion_optimizer/shoot_basketball` directory

Make Your Own Configuration

Each configuration contains three independent parts: general setting, pose generator, and motion generator.

# General Setting
general {
    # describe the results path
    base_exp_dir = ./exp/motion_ablation/motion_optimizer/raise_arms

    # if you only want to generate poses, then you can set "mode = pose".
    mode = motion

    # define your prompt. We highly recommend using the format "a rendered 3d man is xxx"
    text = a rendered 3d man is raising both arms
}

# Pose Generator
pose_generator {
    type = VPoserCodebook
    # you can change the number of candidate poses by setting "topk = 10"
    # for PoseOptimizer and VPoserOptimizer, you can further define the number of iterations and the optimizer type
}

# Motion Generator
# if "mode = pose", you can ignore this part
motion_generator {
    type = MotionOptimizer
    # you can further modify the coefficient of each loss.
    # for example, if the generated motion is too intense, you can reduce the coefficient of the delta loss.
}
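
If you want to inspect a configuration programmatically before running it, the snippet below is one hedged way to do so; it assumes the .conf files are HOCON parsed with pyhocon (the style used by NeuS-like projects) and that the actual files follow the structure of the example above.

# Peek at a config file; assumes HOCON syntax parsed via pyhocon
from pyhocon import ConfigFactory

conf = ConfigFactory.parse_file("confs/motion_ablation/motion_optimizer/shoot_basketball.conf")
print(conf.get_string("general.text"))
print(conf.get_string("pose_generator.type"), conf.get_string("motion_generator.type"))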


License

Distributed under the S-Lab License. See LICENSE for more information.

Related Works

There are many wonderful works that inspired our work or appeared around the same time as ours.

Dream Fields enables zero-shot text-driven general 3D object generation using CLIP and NeRF.

Text2Mesh proposes to edit a template mesh by predicting offsets and colors per vertex using CLIP and differentiable rendering.

CLIP-NeRF can manipulate 3D objects represented by NeRF with natural language or exemplar images by leveraging CLIP.

Text to Mesh facilitates zero-shot text-driven general mesh generation by deforming from a sphere mesh guided by CLIP.

MotionCLIP establishes a projection from the CLIP text space to the motion space through supervised training, which leads to amazing text-driven motion generation results.

Acknowledgements

This study is supported by NTU NAP, MOE AcRF Tier 2 (T2EP20221-0033), and under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).

We thank the following repositories for their contributions in our implementation: NeuS, smplx, vposer, Smplx2FBX.
