
Code for "ATISS: Autoregressive Transformers for Indoor Scene Synthesis", NeurIPS 2021

ATISS: Autoregressive Transformers for Indoor Scene Synthesis


This repository contains the code that accompanies our paper ATISS: Autoregressive Transformers for Indoor Scene Synthesis.

Below you can find detailed usage instructions for training your own models, using our pretrained models, and performing the interactive tasks described in the paper.

If you found this work influential or helpful for your research, please consider citing

@Inproceedings{Paschalidou2021NEURIPS,
  author = {Despoina Paschalidou and Amlan Kar and Maria Shugrina and Karsten Kreis and Andreas Geiger and Sanja Fidler},
  title = {ATISS: Autoregressive Transformers for Indoor Scene Synthesis},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year = {2021}
}

Installation & Dependencies

Our codebase's dependencies are specified in the provided environment.yaml file (see the conda instructions below).

For the visualizations we use simple-3dviz, our easy-to-use library for visualizing 3D data with Python and ModernGL, together with matplotlib for the colormaps. Note that simple-3dviz provides a lightweight and easy-to-use scene viewer based on wxpython. If you wish to use our scripts for visualizing the generated scenes, you will also need to install wxpython. Note that all the renderings in the paper were produced with NVIDIA's OMNIVERSE.

The simplest way to make sure that you have all dependencies in place is to use conda. You can create a conda environment called atiss using

conda env create -f environment.yaml
conda activate atiss
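
If you plan to use the visualization scripts mentioned above, you may additionally need wxpython inside the activated environment. A possible way to add it (this is an assumption; skip it if environment.yaml already provides wxpython) is

pip install wxpython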

Next compile the extension modules. You can do this via

python setup.py build_ext --inplace
pip install -e .
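
As a quick sanity check that the extension modules compiled and the package is importable, you can run the following (this assumes the package defined by setup.py is named scene_synthesis; adjust the import if it differs):

python -c "import scene_synthesis; print(scene_synthesis.__file__)"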

Dataset

To evaluate a pretrained model or train a new model from scratch, you need to obtain the 3D-FRONT and 3D-FUTURE datasets. To download both datasets, please refer to the instructions provided on the datasets' webpage. As soon as you have downloaded them, you are ready to start the preprocessing. In addition to the preprocessing script (preprocess_data.py), we also provide a very useful script for visualizing 3D-FRONT scenes (render_threedfront_scene.py), which you can execute by running

python render_threedfront_scene.py SCENE_ID path_to_output_dir path_to_3d_front_dataset_dir path_to_3d_future_dataset_dir path_to_3d_future_model_info path_to_floor_plan_texture_images

You can also visualize the walls, the windows, as well as textured objects by setting the corresponding arguments. Apart from visualizing the scene with scene id SCENE_ID, the render_threedfront_scene.py script also generates a subfolder inside the output folder (specified via the path_to_output_dir argument) that contains the .obj files and the textures of all objects in this scene. Note that examples of the expected scene ids SCENE_ID can be found in the train/test/val split files for the various rooms in the config folder, e.g. MasterBedroom-28057, LivingDiningRoom-4125, etc.
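
As a concrete illustration, an invocation could look like the following, where all paths are placeholders for wherever you stored the datasets and MasterBedroom-28057 is one of the scene ids from the provided split files:

# hypothetical paths; point them at your local copies of 3D-FRONT and 3D-FUTURE
python render_threedfront_scene.py MasterBedroom-28057 /tmp/atiss_renders /data/3D-FRONT /data/3D-FUTURE-model /data/3D-FUTURE-model/model_info.json demo/floor_plan_texture_images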

Data Preprocessing

Once you have downloaded the 3D-FRONT and 3D-FUTURE datasets, you need to run the preprocess_data.py script to prepare the data for training your own models or generating new scenes with previously trained models. To run the preprocessing script, simply run

python preprocess_data.py path_to_output_dir path_to_3d_front_dataset_dir path_to_3d_future_dataset_dir path_to_3d_future_model_info path_to_floor_plan_texture_images --dataset_filtering threed_front_bedroom

Note that you can choose the filtering for the different room types (e.g. bedrooms, living rooms, dining rooms, libraries) via the --dataset_filtering argument. The path_to_floor_plan_texture_images argument is the path to a folder containing different floor plan textures, which are necessary to render the rooms using a top-down orthographic projection. An example of such a folder can be found in the demo/floor_plan_texture_images folder.

This script starts by parsing all scenes from the 3D-FRONT dataset and then, for each scene, generates a subfolder inside path_to_output_dir that contains the information for all objects in the scene (boxes.npz), the room mask (room_mask.png) and the scene rendered using a top-down orthographic projection (rendered_scene_256.png). Note that for living rooms and dining rooms you also need to change the size of the room during rendering from the default 3.1m to 6.2m via the --room_side argument.
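
For example, a hypothetical preprocessing run for living rooms (assuming a filtering value such as threed_front_livingroom; check the script's help for the exact option names) could look like:

# hypothetical paths; note the larger --room_side needed for living/dining rooms
python preprocess_data.py /data/atiss/living_rooms /data/3D-FRONT /data/3D-FUTURE-model /data/3D-FUTURE-model/model_info.json demo/floor_plan_texture_images --dataset_filtering threed_front_livingroom --room_side 6.2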

Moreover, you will notice that the preprocess_data.py script takes a significant amount of time to parse all 3D-FRONT scenes. To reduce the waiting time, we cache the parsed scenes and save them to the /tmp/threed_front.pkl file. Therefore, once you have parsed the 3D-FRONT scenes, you can provide this path via the PATH_TO_SCENES environment variable the next time you run this script, as follows:

PATH_TO_SCENES="/tmp/threed_front.pkl" python preprocess_data.py path_to_output_dir path_to_3d_front_dataset_dir path_to_3d_future_dataset_dir path_to_3d_future_model_info path_to_floor_plan_texture_images --dataset_filtering room_type

Finally, to further reduce the preprocessing time, note that it is possible to run multiple instances of this script in parallel, as it automatically checks whether a scene has already been preprocessed and, if so, moves on to the next scene.
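
For instance, once /tmp/threed_front.pkl exists, a rough sketch of launching two instances in parallel (all paths are hypothetical) could be:

# both instances share the cached parsed scenes and skip already-preprocessed rooms
ARGS="/data/atiss/bedrooms /data/3D-FRONT /data/3D-FUTURE-model /data/3D-FUTURE-model/model_info.json demo/floor_plan_texture_images --dataset_filtering threed_front_bedroom"
PATH_TO_SCENES="/tmp/threed_front.pkl" python preprocess_data.py $ARGS &
PATH_TO_SCENES="/tmp/threed_front.pkl" python preprocess_data.py $ARGS &
wait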

How to pickle the 3D-FUTURE dataset

Most of our scripts require a path to a file that contains the parsed ThreedFutureDataset after being pickled. To generate this file, we provide the pickle_threed_future_dataset.py script, which does this automatically for you. You can simply run it as follows:

python pickle_threed_future_dataset.py path_to_output_dir path_to_3d_front_dataset_dir path_to_3d_future_dataset_dir path_to_3d_future_model_info --dataset_filtering room_type

Note that this script runs significantly faster if you specify the PATH_TO_SCENES environment variable. Moreover, this step is necessary for all room types that contain different objects. For 3D-FRONT this means the bedrooms and the living/dining rooms, so you have to run this script twice with different --dataset_filtering options. Please check the help menu for additional details.
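
For example, a hypothetical pair of runs for bedrooms and living rooms (the filtering values and paths below are assumptions; check the help menu for the exact names) could be:

PATH_TO_SCENES="/tmp/threed_front.pkl" python pickle_threed_future_dataset.py /data/atiss/pickled /data/3D-FRONT /data/3D-FUTURE-model /data/3D-FUTURE-model/model_info.json --dataset_filtering threed_front_bedroom
PATH_TO_SCENES="/tmp/threed_front.pkl" python pickle_threed_future_dataset.py /data/atiss/pickled /data/3D-FRONT /data/3D-FUTURE-model /data/3D-FUTURE-model/model_info.json --dataset_filtering threed_front_livingroom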

Usage

As soon as you have installed all dependencies and generated the preprocessed data, you can start training new models from scratch, evaluating our pre-trained models, and visualizing the generated scenes using one of our pre-trained models. All scripts expect a path to a config file. In the config folder you can find the configuration files for the different room types. Make sure to change the dataset_directory argument to the path where you saved the preprocessed data earlier.
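
For instance, a quick, purely illustrative way to point a config at your preprocessed data from the command line, assuming a bedroom config named something like config/bedrooms_config.yaml and GNU sed, would be:

# replace the dataset_directory entry in the chosen config with your preprocessed data path
sed -i 's|dataset_directory:.*|dataset_directory: "/data/atiss/bedrooms"|' config/bedrooms_config.yaml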

Scene Generation

To generate rooms using a previously trained model, we provide the generate_scenes.py script and you can execute it by running

python generate_scenes.py path_to_config_yaml path_to_output_dir path_to_3d_future_pickled_data path_to_floor_plan_texture_images --weight_file path_to_weight_file

where the argument --weight_file specifies the path to a trained model and the argument path_to_config_yaml defines the path to the config file used to train that particular model. By default, this script randomly selects a floor plan from the test set and, conditioned on this floor plan, generates different arrangements of objects. Note that if you want to generate a scene conditioned on a specific floor plan, you can select it by providing its scene id via the --scene_id argument. In case you want to run this script headlessly, you should set the --without_screen argument. Finally, path_to_3d_future_pickled_data specifies the path to the file that contains the parsed ThreedFutureDataset after being pickled.
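
Putting this together, a hypothetical headless invocation conditioned on a specific floor plan (the config name, paths and checkpoint name are all placeholders) might look like:

python generate_scenes.py config/bedrooms_config.yaml /tmp/atiss_generated /data/atiss/pickled/threed_future_bedroom.pkl demo/floor_plan_texture_images --weight_file /data/atiss_runs/bedrooms/model_01000 --scene_id MasterBedroom-28057 --without_screen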

Scene Completion & Object Placement

To perform scene completion, we provide the scene_completion.py script that can be executed by running

python scene_completion.py path_to_config_yaml path_to_output_dir path_to_3d_future_pickled_data path_to_floor_plan_texture_images --weight_file path_to_weight_file

where the argument --weight_file specifies the path to a trained model and the argument path_to_config_yaml defines the path to the config file used to train that particular model. For this script, make sure that the encoding type in the config file also contains the word eval. By default, this script randomly selects a room from the test set and, conditioned on this partial scene, populates the empty space with objects. However, you can choose a specific room via the --scene_id argument. This script can also be used to perform object placement, namely starting from a partial scene and adding an object of a specific object category.
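
A quick sanity check before running the script is to confirm that the encoding type in your config indeed contains eval, e.g. (assuming your config lives at config/bedrooms_config.yaml):

grep -n "eval" config/bedrooms_config.yaml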

In the output directory, the scene_completion.py script generates two folders for each completion, one that contains the mesh files of the initial partial scene and another one that contains the mesh files of the completed scene.

Object Suggestions

We also provide a script that performs object suggestions based on a user-specified region of acceptable positions. Similar to the previous scripts, you can execute it by running

python object_suggestion.py path_to_config_yaml path_to_output_dir path_to_3d_future_pickled_data path_to_floor_plan_texture_images --weight_file path_to_weight_file

where the argument --weight_file specifies the path to a trained model and the argument path_to_config_yaml defines the path to the config file used to train that particular model. Also for this script, please make sure that the encoding type in the config file contains the word eval. By default, this script randomly selects a room from the test set and the user can either choose to remove some objects or keep the scene unchanged. Subsequently, the user needs to specify the acceptable positions for placing an object using 6 comma-separated numbers that define the bounding box of the valid positions. Similar to the previous scripts, it is possible to select a particular scene via the --scene_id argument.

In the output directory, the object_suggestion.py script generates two folders in each run, one that contains the mesh files of the initial scene and another one that contains the mesh files of the completed scene with the suggested object.

Failure Cases Detection and Correction

We also provide a script that performs failure case correction on a scene that contains a problematic object. You can simply execute it by running

python failure_correction.py path_to_config_yaml path_to_output_dir path_to_3d_future_pickled_data path_to_floor_plan_texture_images --weight_file path_to_weight_file

where the argument --weight_file specifies the path to a trained model and the argument path_to_config_yaml defines the path to the config file used to train that particular model. Also for this script, please make sure that the encoding type in the config file contains the word eval. By default, this script randomly selects a room from the test set and the user needs to select an object inside the room, which will then be located in an unnatural position. Given the scene with the unnaturally positioned object, our model identifies the problematic object and repositions it to a more plausible position.

In the output directory, the failure_correction.py script generates two folders in each run, one that contains the mesh files of the initial scene with the problematic object and another one that contains the mesh files of the new scene.

Training

Finally, to train a new network from scratch, we provide the train_network.py script. To execute this script, you need to specify the path to the configuration file you wish to use and the path to the output directory, where the trained models and the training statistics will be saved. Namely, to train a new model from scratch, you simply need to run

python train_network.py path_to_config_yaml path_to_output_dir

Note that it is also possible to start from a previously trained model by specifying the --weight_file argument, which should contain the path to a previously trained model.
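
For example, resuming from an earlier checkpoint could look like the following, where the config, output directory and checkpoint path are all placeholders:

python train_network.py config/bedrooms_config.yaml /data/atiss_runs/bedrooms --weight_file /data/atiss_runs/bedrooms/model_01000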

Note that if you want to use the RAdam optimizer during training, you will also have to download and install the corresponding code from the RAdam repository.

We also provide the option to log the experiment's evolution using Weights & Biases. To do so, you simply need to set the --with_wandb_logger argument and, of course, have wandb installed in your conda environment.
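
A hypothetical run with Weights & Biases logging enabled (the config and output paths are placeholders) would then be:

# install the wandb client into the atiss environment, then train with logging enabled
pip install wandb
python train_network.py config/bedrooms_config.yaml /data/atiss_runs/bedrooms --with_wandb_logger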

Relevant Research

Please also check out the following papers that explore similar ideas:

  • Fast and Flexible Indoor Scene Synthesis via Deep Convolutional Generative Models pdf
  • Sceneformer: Indoor Scene Generation with Transformers pdf
