  • Stars: 258
  • Rank: 158,189 (Top 4%)
  • Language: Python
  • License: MIT License
  • Created: over 4 years ago
  • Updated: about 3 years ago


Repository Details

Official Code: 3D Scene Reconstruction from a Single Viewport

Single View Reconstruction

3D Scene Reconstruction from a Single Viewport

Maximilian Denninger and Rudolph Triebel

Accepted paper at ECCV 2020: paper, short video, long video

The author (Maximilian Denninger) gave a talk about the paper, which can be found here.

Overview

data overview image

Abstract

We present a novel approach to infer volumetric reconstructions from a single viewport, based only on an RGB image and a reconstructed normal image. To overcome the problem of reconstructing regions in 3D that are occluded in the 2D image, we propose to learn this information from synthetically generated high-resolution data. To do this, we introduce a deep network architecture that is specifically designed for volumetric TSDF data by featuring a specific tree net architecture. Our framework can handle a 3D resolution of 512³ by introducing a dedicated compression technique based on a modified autoencoder. Furthermore, we introduce a novel loss shaping technique for 3D data that guides the learning process towards regions where free and occupied space are close to each other. As we show in experiments on synthetic and realistic benchmark data, this leads to very good reconstruction results, both visually and in terms of quantitative measures.
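The loss shaping mentioned above can be pictured with a small numerical sketch: voxels whose truncated signed distance is close to zero lie near the boundary between free and occupied space and should contribute more to the loss. The weighting below is a deliberately simplified illustration, not the actual scheme implemented in the LossCalculatorTSDF.

import numpy as np

def loss_shaping_weights(tsdf, truncation=1.0, boost=10.0):
    # Toy weighting: voxels near the free/occupied boundary (TSDF close to zero)
    # receive a higher weight than voxels far away from any surface.
    closeness = 1.0 - np.clip(np.abs(tsdf) / truncation, 0.0, 1.0)
    return 1.0 + boost * closeness

# Example on a random TSDF block with values in [-1, 1]
tsdf = np.random.uniform(-1.0, 1.0, size=(16, 16, 16)).astype(np.float32)
weights = loss_shaping_weights(tsdf)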

Content description

This repository contains everything necessary to reproduce the results presented in our paper, including the generation of the data and the training of our model. Be aware that the data generation is time consuming: even though each process is heavily optimized, billions of truncated signed distance values and weights have to be calculated, in addition to all the color and normal images. The data used to train our model was around 1 TB in size after compression.
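To get a feeling for why compression matters here, a rough back-of-envelope calculation (the scene count below is a made-up example, not the actual size of our dataset):

# Rough size estimate for uncompressed TSDF grids (illustrative numbers only).
voxels_per_scene = 512 ** 3                # 512^3 TSDF values per camera pose
bytes_per_voxel = 4                        # one float32 distance value
scene_bytes = voxels_per_scene * bytes_per_voxel
print(scene_bytes / 2 ** 30)               # ~0.5 GiB per scene, loss weights come on top

num_scenes = 20_000                        # hypothetical number of generated scenes
print(num_scenes * scene_bytes / 2 ** 40)  # ~9.8 TiB uncompressed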

As SUNCG is no longer available, we cannot upload the data we used for training, as it falls under the SUNCG restrictions. If you do not have access to the SUNCG dataset, you can try using the 3D-Front dataset and adapt the code to this new dataset.

Citation

If you find our work useful, please cite us with:

@inproceedings{denninger2020,
  title={3D Scene Reconstruction from a Single Viewport},
  author={Denninger, Maximilian and Triebel, Rudolph},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2020}
}

Environment

Before you execute any of the modules in this project, please install the conda environment:

conda env create -f environment.yml

This will create the SingleViewReconstruction environment, which you can activate with:

conda activate SingleViewReconstruction

This environment uses TensorFlow 1.15 with Python 3.7 and also includes some OpenGL packages for the visualizer.
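After activating the environment, a quick sanity check confirms that the expected versions are picked up (the GPU check is optional and only succeeds if a usable GPU is visible):

# Sanity check for the SingleViewReconstruction environment.
import sys
import tensorflow as tf

print(sys.version)                  # should report Python 3.7.x
print(tf.__version__)               # should report 1.15.x
print(tf.test.is_gpu_available())   # True if TensorFlow can see a usable GPU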

Quick and easy complete run of the pipeline

There is a script that provides a full run of the BlenderProc pipeline; you will need the "SingleViewReconstruction" environment for it.

Be aware before executing this script that it will run a lot of code and download a lot of data to your PC.

The script first downloads BlenderProc and then Blender itself. It also downloads the SceneNet dataset and the corresponding texture library used by SceneNet. It then renders some color & normal images for the pipeline and generates a ground-truth output voxelgrid to compare the results against.

Before running it, make sure that you adapt the SDFGen/CMakeLists.txt file. See this README.md.

python run_on_example_scenes_from_scenenet.py

This will take a while and afterwards you can look at the generated scene with:

python TSDFRenderer/visualize_tsdf.py BlenderProc/output_dir/output_0.hdf5
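If you prefer to inspect the generated file without the renderer, a minimal sketch using h5py is shown below; the exact dataset names inside output_0.hdf5 depend on the pipeline configuration, so the snippet simply lists whatever is stored:

# List all datasets stored in the generated hdf5 container.
import h5py

with h5py.File("BlenderProc/output_dir/output_0.hdf5", "r") as f:
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(show)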

Data generation

This is a quick overview of the data generation process, which is entirely based on the SUNCG house files.

data overview image

  1. The SUNCG house.json file is converted with the SUNCGToolBox into a house.obj and a camerapositions file; for more information see: SUNCG
  2. Then these two files are used to generate the TSDF voxelgrids; for more information see: SDFGen
  3. The voxelgrid is used to calculate the loss weights via the LossCalculatorTSDF
  4. These are used to first train an autoencoder and then compress the 512³ voxelgrids down to a size of 32³x64, which we call encoded. See CompressionAutoEncoder and the shape sketch after this list.
  5. Now only the color & normal images are missing; for those we use BlenderProc with the config file defined in here.
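One way to picture the encoded representation from step 4 is as a block-wise compression; the sketch below only illustrates the shapes involved and is not the actual CompressionAutoEncoder:

import numpy as np

# Shape-only illustration: a 512^3 TSDF grid viewed as 32^3 blocks of 16^3 voxels.
# Note: this allocates roughly 1 GiB of memory.
tsdf = np.zeros((512, 512, 512), dtype=np.float32)
blocks = tsdf.reshape(32, 16, 32, 16, 32, 16).transpose(0, 2, 4, 1, 3, 5)
blocks = blocks.reshape(32, 32, 32, 16 ** 3)

# A (hypothetical) encoder would map each 16^3 block to 64 latent values,
# which yields the 32^3 x 64 "encoded" grids used for training.
latent = np.zeros((32, 32, 32, 64), dtype=np.float32)
print(blocks.shape, "->", latent.shape)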

These are then combined with this script into several TFRecord files, which are used to train our SingleViewReconstruction network.
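For orientation, here is a minimal sketch of how one training sample could be serialized into a TFRecord; the feature names and shapes are assumptions for illustration and do not necessarily match the combine script:

import numpy as np
import tensorflow as tf

def make_example(color, normal, encoded):
    # Pack one sample (color image, normal image, encoded voxelgrid) into a tf.train.Example.
    def bytes_feature(arr):
        return tf.train.Feature(bytes_list=tf.train.BytesList(value=[arr.tobytes()]))
    return tf.train.Example(features=tf.train.Features(feature={
        "color": bytes_feature(color),
        "normal": bytes_feature(normal),
        "encoded_voxelgrid": bytes_feature(encoded),
    }))

# Toy data with plausible shapes, written to a single record file.
color = np.zeros((512, 512, 3), dtype=np.uint8)
normal = np.zeros((512, 512, 3), dtype=np.float32)
encoded = np.zeros((32, 32, 32, 64), dtype=np.float32)

with tf.io.TFRecordWriter("train_0.tfrecord") as writer:
    writer.write(make_example(color, normal, encoded).SerializeToString())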

Download of the trained models

We provide a script to easily download all models trained in this approach:

  1. The SingleViewReconstruction model
  2. The Autoencoder Compression Model CompressionAutoEncoder
  3. The Normal Generation Model UNetNormalGen

python download_models.py

More Repositories

  1. stable-baselines3 (Python, 8,960 stars): PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
  2. BlenderProc (Python, 2,083 stars): A procedural Blender pipeline for photorealistic training image generation.
  3. rl-baselines3-zoo (Python, 2,034 stars): A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.
  4. 3DObjectTracking (C++, 710 stars): Algorithms and publications on 3D object tracking.
  5. AugmentedAutoencoder (Python, 318 stars): Official Code: Implicit 3D Orientation Learning for 6D Object Detection from RGB Images.
  6. RAFCON (Python, 180 stars): RAFCON (RMC advanced flow control) uses hierarchical state machines, featuring concurrent state execution, to represent robot programs. It ships with a graphical user interface supporting the creation of state machines and contains IDE-like debugging mechanisms. Alternatively, state machines can be generated programmatically using RAFCON's API.
  7. rl-trained-agents (Python, 102 stars): A collection of pre-trained RL agents using Stable Baselines3.
  8. oaisys (Python, 52 stars)
  9. granite (C++, 49 stars)
  10. instr (Python, 43 stars): Code of the paper "Unknown Object Segmentation from Stereo Images", IROS 2021.
  11. curvature (Python, 23 stars): Official Code: Estimating Model Uncertainty of Neural Networks in Sparse Information Form, ICML 2020.
  12. UMF (Python, 17 stars)
  13. amp (C++, 13 stars): Point-to-point motion planning library for articulated robots.
  14. DistinctNet (Python, 11 stars): "What's This?" - Learning to Segment Unknown Objects from Manipulation Sequences.
  15. GRACE (Python, 10 stars): Graph Assembly processing networks for robotic assembly sequence planning and feasibility learning.
  16. rosmc (Python, 9 stars): ROS Mission Control (ROSMC) -- a high-level mission designing and monitoring tool with intuitive graphical interfaces.
  17. python-jsonconversion (Python, 8 stars): Convert arbitrary Python objects into JSON strings and back.
  18. moegplib (Python, 8 stars): Official Code: Trust Your Robots! Predictive Uncertainty Estimation of Neural Networks with Sparse Gaussian Processes.
  19. ExReNet (Python, 7 stars): Learning to Localize in New Environments from Synthetic Training Data.
  20. RAFCON-ros-state-machines (Python, 5 stars): RAFCON state machine examples using the ROS middleware.
  21. python-yaml-configuration (Python, 4 stars)
  22. BayesSim2Real (Python, 4 stars): Source code for the IROS 2022 paper: Bayesian Active Learning for Sim-to-Real Robotic Perception.
  23. TendonDrivenContinuum (4 stars)
  24. multicam_dataset_reader (C++, 2 stars)
  25. stios-utils (Python, 2 stars): Utility functions for the Stereo Instances on Surfaces (STIOS) dataset.
  26. SemanticSingleViewReconstruction (C++, 1 star)
  27. rafcon-task-planner-plugin (Python, 1 star): A plugin for RAFCON to interface arbitrary PDDL planners.
  28. RECALL (Python, 1 star): Code and image database for the IROS 2022 paper "RECALL: Rehearsal-free Continual Learning for Object Classification". An algorithm to learn new object categories on the fly without forgetting the old ones and without the need to save previous images.