
🔥DM-NeRF in PyTorch (ICLR 2023)


DM-NeRF: 3D Scene Geometry Decomposition and Manipulation from 2D Images

Bing Wang, Lu Chen, Bo Yang*
Paper | Video | DM-SR

The architecture of our proposed DM-NeRF. Given a 3D point $\boldsymbol{p}$, we learn an object code through a series of loss functions using both 2D and 3D supervision signals.
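To illustrate the idea of an object code, here is a toy NumPy sketch of a branch that maps a per-point feature to a soft assignment over a fixed number of object slots. All dimensions and weights below are hypothetical stand-ins, not the paper's actual architecture or trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (NOT the paper's actual sizes).
D_FEAT = 64   # per-point feature size from the NeRF backbone (assumed)
K = 16        # maximum number of object slots (assumed)

# Random toy weights standing in for a trained object-code branch.
W1 = rng.standard_normal((D_FEAT, 128)) * 0.1
b1 = np.zeros(128)
W2 = rng.standard_normal((128, K)) * 0.1
b2 = np.zeros(K)

def object_code(feat: np.ndarray) -> np.ndarray:
    """Map a per-point feature to a soft object code: a distribution over K slots."""
    h = np.maximum(feat @ W1 + b1, 0.0)   # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()

feat = rng.standard_normal(D_FEAT)
code = object_code(feat)
```

The softmax output can be supervised with 2D object masks (projected along camera rays) and 3D consistency losses, which is the role the loss functions above play.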

1. Decomposition and Reconstruction:

2. Decomposition and Rendering:

3. Manipulation:

4. Installation

DM-NeRF uses a Conda environment that makes it easy to install all dependencies.

  1. Create the DM-NeRF Conda environment (Python 3.7) with Miniconda.
conda create --name DM-NeRF python=3.7
conda activate DM-NeRF
  2. Install all dependencies by running:
pip install -r requirements.txt

4.1 Datasets

In this paper, we consider the following three different datasets:

(1) DM-SR

To the best of our knowledge, there is no existing 3D scene dataset suitable for quantitative evaluation of geometry manipulation. Therefore, we create a synthetic dataset with 8 different types of complex indoor rooms, called DM-SR. The room types and designs follow the Hypersim Dataset. Overall, we first render the static scenes, then manipulate each scene and render it a second time. Each scene has a physical size of about 12x12x3 meters and contains around 8 objects. We will keep updating DM-SR for future research in the community.

(2) Replica

In this paper, we use 7 scenes office0, office2, office3, office4, room0, room1, room2 from the Replica Dataset. We asked the authors of Semantic-NeRF to generate color images and 2D object masks with camera poses at 640x480 pixels for each of the 7 scenes. Each scene has 59~93 objects with very diverse sizes. Details of camera settings and trajectories can be found here.

(3) ScanNet

In this paper, we use 8 scenes scene0010_00, scene0012_00, scene0024_00, scene0033_00, scene0038_00, scene0088_00, scene0113_00, scene0192_00 from the ScanNet Dataset.

4.2 Training

To train our standard DM-NeRF, simply run the following command with a config file that specifies the data directory and hyperparameters.

CUDA_VISIBLE_DEVICES=0 python -u train_dmsr.py --config configs/dmsr/train/study.txt

Other working modes and setups can also be selected via the above command by choosing different config files.

4.3 Evaluation

In this paper, we use PSNR, SSIM, and LPIPS for rendering evaluation, and mAP for both decomposition and manipulation evaluations.
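As a reference for what PSNR measures, here is a minimal NumPy implementation of the metric. This is an illustrative sketch, not the repository's evaluation code; SSIM, LPIPS, and mAP follow their standard definitions:

```python
import numpy as np

def psnr(img: np.ndarray, ref: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between two images with values in [0, max_val]."""
    mse = np.mean((np.asarray(img, dtype=float) - np.asarray(ref, dtype=float)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform per-pixel error of 0.1 gives MSE = 0.01, i.e. about 20 dB.
ref = np.zeros((4, 4))
print(psnr(ref + 0.1, ref))
```

Higher is better; identical images give infinite PSNR.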

(1) Decomposition

Quantitative Evaluation

For decomposition evaluation, choose a specific config file and then run:

CUDA_VISIBLE_DEVICES=0 python -u test_dmsr.py --config configs/dmsr/test/study.txt
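For intuition, the core of a mask-based mAP evaluation is IoU matching between predicted and ground-truth instance masks. The sketch below shows greedy matching at a single IoU threshold; actual mAP additionally averages precision over recall levels and thresholds, and this is not the repository's evaluation code:

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean instance masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def match_at_threshold(preds, gts, thr: float = 0.75):
    """Greedily match predicted masks to ground-truth masks at an IoU threshold.
    Returns (true positives, false positives, false negatives)."""
    unmatched = list(range(len(gts)))
    tp = 0
    for p in preds:
        best, best_iou = None, thr
        for gi in unmatched:
            iou = mask_iou(p, gts[gi])
            if iou >= best_iou:
                best, best_iou = gi, iou
        if best is not None:
            unmatched.remove(best)   # each ground-truth mask is matched at most once
            tp += 1
    return tp, len(preds) - tp, len(unmatched)

# Toy example: one correct prediction, one spurious prediction.
gt = np.zeros((4, 4), dtype=bool)
gt[:2, :2] = True
tp, fp, fn = match_at_threshold([gt.copy(), ~gt], [gt])
```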
Mesh Generation

For mesh generation, you can change the config file and then run:

CUDA_VISIBLE_DEVICES=0 python -u test_dmsr.py --config configs/dmsr/test/meshing.txt

(2) Manipulation

Quantitative Evaluation

We provide the DM-SR dataset for the quantitative evaluation of geometry manipulation.

Set the target object and desired manipulation settings in a specific config file, and then run:

CUDA_VISIBLE_DEVICES=0 python -u test_dmsr.py --config configs/dmsr/mani/study.txt --mani_mode translation
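Conceptually, a rigid translation of a decomposed object can be rendered by querying the original field at inverse-warped points. A toy NumPy sketch, where the density function is a hypothetical stand-in rather than the paper's trained network:

```python
import numpy as np

# Hypothetical stand-in for a trained radiance field: density falls off
# with squared distance from a fixed object centre (NOT the paper's model).
CENTRE = np.array([1.0, 0.0, 0.5])

def density(p: np.ndarray) -> float:
    return float(np.exp(-np.sum((p - CENTRE) ** 2)))

def manipulated_density(p: np.ndarray, t: np.ndarray) -> float:
    """Translate the object by t: query the ORIGINAL field at the
    inverse-warped point p - t, so the object appears moved by +t."""
    return density(p - t)

t = np.array([0.5, 0.0, 0.0])
# The density peak moves from CENTRE to CENTRE + t.
peak_after = manipulated_density(CENTRE + t, t)
```

Rotation and scaling follow the same pattern with the corresponding inverse transform applied to the query points belonging to the target object.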
Qualitative Evaluation

For other qualitative evaluations, you can change the config file and then run:

CUDA_VISIBLE_DEVICES=0 python -u test_dmsr.py --config configs/dmsr/mani/demo_deform.txt

5. Video (YouTube)

Citation

If you find our work useful in your research, please consider citing:

@article{wang2022dmnerf,
  title={DM-NeRF: 3D Scene Geometry Decomposition and Manipulation from 2D Images},
  author={Wang, Bing and Chen, Lu and Yang, Bo},
  journal={arXiv preprint arXiv:2208.07227},
  year={2022}
}

License

Licensed under the CC BY-NC-SA 4.0 license, see LICENSE.

Updates

  • 31/8/2022: Data release!
  • 25/8/2022: Code release!
  • 15/8/2022: Initial release!

Related Repos

  1. RangeUDF: Semantic Surface Reconstruction from 3D Point Clouds
  2. GRF: Learning a General Radiance Field for 3D Representation and Rendering
  3. 3D-BoNet: Learning Object Bounding Boxes for 3D Instance Segmentation on Point Clouds