Patch-based 3D Natural Scene Generation from a Single Example (CVPR 2023)

โญ Generating (and Editing) diverse 3D natural scenes from a single example without any training.

Weiyu Li*, Xuelin Chen*†, Jue Wang, Baoquan Chen

Project Page | ArXiv | Paper | Supp_material | Video

High-quality 3D scenes created by our method (background sky added in post-processing)

Prerequisite

Setup environment

😃 We also provide a Dockerfile for easy installation; see Setup using Docker.

Clone this repository.

git clone git@github.com:wyysf-98/Sin3DGen.git

Install the required packages.

conda create -n Sin3DGen python=3.8
conda activate Sin3DGen
conda install -c pytorch pytorch=1.9.1 torchvision=0.10.1 cudatoolkit=10.2 && \
conda install -c bottler nvidiacub && \
pip install -r docker/requirements.txt
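
As an optional sanity check (not part of the repo), you can verify that the pinned PyTorch build sees your GPU before moving on:

# Optional environment check: confirms the installed versions and CUDA visibility.
import torch
import torchvision

print("torch:", torch.__version__)              # expected: 1.9.1
print("torchvision:", torchvision.__version__)  # expected: 0.10.1
print("CUDA available:", torch.cuda.is_available())
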
Data preparation

We provide some Plenoxels scenes and optimized mapping fields in the link for a quick test. Please download and unzip them to the current folder. The folder structure should then be as follows:

└── data
    └── DevilsTower
        ├── mapping_fields
        |   ├── ...
        |   └── sxxxxxx.npz     # Synthesized mapping fields
        └── ckpts
            ├── rgb_fps8.mp4    # Visualization of the scene
            ├── ckpt_reso.npz   # Plenoxels checkpoint files
            └── mesh_reso.obj   # Extracted meshes
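
If you want to verify the download programmatically, here is a minimal sketch (not part of the repo) that checks the layout above:

# Minimal layout check for the downloaded example (paths follow the tree above).
from pathlib import Path

scene = Path("data/DevilsTower")
for sub in ("mapping_fields", "ckpts"):
    folder = scene / sub
    print(folder, "->", "ok" if folder.is_dir() else "MISSING")
print("mapping fields:", sorted(p.name for p in (scene / "mapping_fields").glob("*.npz")))
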
Use your own data*

Please refer to svox2 to prepare your own data. You can also use Blender to render scenes as in NSVF.

* Note that all scenes must be inside a unit box centered at the origin, as mentioned in the paper.

Then you should obtain your scenes using our forked version (Link).

The main differences from the original version are:

  • We modified parts of opt.py to preserve intermediate checkpoints during training.
  • We added more training stages in the configuration.

git clone git@github.com:wyysf-98/svox2.git
cd svox2
./launch.sh {your_data_name} 0 {your_data_path} -c configs/syn_start_from_12.json
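
Because the unit-box requirement above is easy to overlook, here is a hypothetical sanity check (not part of the repo; the mesh path refers to the example scene from the data layout above) that prints the bounding box of an extracted mesh, so you can confirm the scene is centered at the origin and scaled as expected:

# Hypothetical sanity check (not from the repo): print the bounding box of an
# extracted mesh so you can confirm the scene sits inside the unit box noted above.
import numpy as np

verts = np.array([
    [float(x) for x in line.split()[1:4]]
    for line in open("data/DevilsTower/ckpts/mesh_reso.obj")
    if line.startswith("v ")
])
print("bbox min:", verts.min(axis=0))
print("bbox max:", verts.max(axis=0))
print("center:  ", (verts.min(axis=0) + verts.max(axis=0)) / 2)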

Quick inference demo

For a local quick inference demo using an optimized mapping field, you can run:

python quick_inference_demo.py -m 'run' \
      --config './configs/default.yaml' \
      --exemplar './data/DevilsTower/ckpts' \
      --resume './data/DevilsTower/mapping_fields/s566239.npz' \
      --output './outputs/quick_inference_demo/DevilsTower_s566239' \
      --scene_reso '[512, 512, 512]' # resolution for visualization, change to '[384, 384, 384]' or lower when OOM
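
If you downloaded several synthesized mapping fields, a hypothetical batch driver (not part of the repo; it simply reuses the flags shown above) can render each one in turn:

# Hypothetical batch driver (not part of the repo): run the demo for every
# downloaded mapping field of the DevilsTower example.
import glob
import os
import subprocess

for npz in sorted(glob.glob("data/DevilsTower/mapping_fields/*.npz")):
    name = os.path.splitext(os.path.basename(npz))[0]
    subprocess.run([
        "python", "quick_inference_demo.py", "-m", "run",
        "--config", "./configs/default.yaml",
        "--exemplar", "./data/DevilsTower/ckpts",
        "--resume", npz,
        "--output", f"./outputs/quick_inference_demo/DevilsTower_{name}",
        "--scene_reso", "[384, 384, 384]",  # lower than 512^3 to reduce memory pressure
    ], check=True)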

Optimization

We provide a Colab notebook for a demo.

We use an NVIDIA Tesla V100 with 32 GB RAM to generate the novel scenes, which takes about 10 minutes, as mentioned in our paper.

python generate.py -m 'run' \
      --config './configs/default.yaml' \
      --exemplar './data/DevilsTower/ckpts'

If you encounter an OOM problem, try reducing pyr_reso for synthesis by adding --pyr_reso [16, 21, 28, 38, 51, 68, 91], or scene_reso for visualization by adding --scene_reso [216, 216, 216]; a scripted version of this reduced-memory run is sketched below. For more configuration options, please refer to the comments in configs/default.yaml.
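
For reference, here is a sketch (not from the repo) of the reduced-memory invocation, combining both flags mentioned above:

# One way (not from the repo) to script the reduced-memory run described above.
import subprocess

subprocess.run([
    "python", "generate.py", "-m", "run",
    "--config", "./configs/default.yaml",
    "--exemplar", "./data/DevilsTower/ckpts",
    "--pyr_reso", "[16, 21, 28, 38, 51, 68, 91]",  # coarser synthesis pyramid
    "--scene_reso", "[216, 216, 216]",             # lower visualization resolution
], check=True)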

Evaluation

We provide the relevant code for evaluating the metrics (SIFID, SIMMD, image_diversity, scene_diversity); please adapt the evaluation script to your actual setup.

cd evaluation
python compute_metrics.py --exp {out_path} \
                          --img_gt {GT_images_path} \
                          --mesh_gt {GT_mesh_path} \
                          --out_dir ./results/{exp_name}
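
To evaluate several generated scenes in one go, here is a hypothetical wrapper (not part of the repo; the experiment and ground-truth paths are placeholders) around the command above:

# Hypothetical wrapper (not from the repo): evaluate several generated scenes
# against the same ground truth. All paths below are placeholders for your own setup.
import subprocess
from pathlib import Path

gt_images = "path/to/GT_images"
gt_mesh = "path/to/GT_mesh"
experiments = [Path("../outputs/quick_inference_demo/DevilsTower_s566239")]  # add your output folders here

for exp in experiments:
    subprocess.run([
        "python", "compute_metrics.py",
        "--exp", str(exp),
        "--img_gt", gt_images,
        "--mesh_gt", gt_mesh,
        "--out_dir", f"./results/{exp.name}",
    ], check=True)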

Acknowledgement

The implementation of exact_search.py and the image evaluation code are partly based on Efficient-GPNN. We thank the authors for generously releasing their code.

Citation

If you find our work useful for your research, please consider citing it using the following BibTeX entry.

@inproceedings{weiyu23Sin3DGen,
    author    = {Weiyu Li and Xuelin Chen and Jue Wang and Baoquan Chen},
    title     = {Patch-based 3D Natural Scene Generation from a Single Example},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2023},
}