Atlas: End-to-End 3D Scene Reconstruction from Posed Images

Project Page | Paper | Video | Models | Sample Data

Zak Murez, Tarrence van As, James Bartolozzi, Ayan Sinha, Vijay Badrinarayanan, and Andrew Rabinovich

Quickstart

We provide a Colab Notebook to try inference.

Installation

We provide a Docker image, Docker/Dockerfile, with all the dependencies.

Or you can install them yourself:

conda install -y pytorch=1.5.0 torchvision=0.6.0 cudatoolkit=10.2 -c pytorch
conda install opencv
pip install \
  'open3d>=0.10.0.0' \
  'trimesh>=3.7.6' \
  'pyquaternion>=0.9.5' \
  'pytorch-lightning>=0.8.5' \
  'pyrender>=0.1.43'
python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.5/index.html
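
After installing, a quick import check (illustrative only, not part of this repository) can confirm that the pinned versions are the ones actually picked up:

# Sanity check of the installed dependencies (illustrative only).
import torch, torchvision, open3d, trimesh, pytorch_lightning, detectron2
print('torch', torch.__version__, 'cuda available:', torch.cuda.is_available())
print('torchvision', torchvision.__version__)
print('open3d', open3d.__version__)
print('trimesh', trimesh.__version__)
print('pytorch-lightning', pytorch_lightning.__version__)
print('detectron2', detectron2.__version__)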

For 16-bit mixed precision (the default training setting) you will also need NVIDIA apex:

git clone https://github.com/NVIDIA/apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./apex

For headless rendering with pyrender (used for evaluation) see installation instructions here.

For inference with COLMAP see installation instructions here.

(If you have problems running the code, try using the exact versions specified above; for example, the pytorch-lightning API has not settled yet.)

Data Preparation

Sample

We provide a small sample scene for easy download and rapid inference. Download and extract the data to DATAROOT. The directory structure should look like:

DATAROOT
└───sample
│   └───sample1
│       │   intrinsics.txt
│       └───color
│       │   │   00000001.jpg
│       │   │   00000002.jpg
│       │   │   ...
│       └───pose
│           │   00000001.txt
│           │   00000002.txt
│           │   ...
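
The exact contents of intrinsics.txt and the pose files are not spelled out here; assuming intrinsics.txt holds a 3x3 camera intrinsics matrix and each pose/*.txt a 4x4 camera-to-world matrix stored as whitespace-separated text, a minimal check of one frame could look like:

# Illustrative check of one sample frame (the 3x3 intrinsics and 4x4
# camera-to-world pose shapes are assumptions, not documented facts).
import numpy as np
K = np.loadtxt('DATAROOT/sample/sample1/intrinsics.txt')
pose = np.loadtxt('DATAROOT/sample/sample1/pose/00000001.txt')
print('intrinsics', K.shape)  # expect (3, 3) under the assumption above
print('pose', pose.shape)     # expect (4, 4) under the assumption above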

Next run our data preparation script, which parses the raw data format into our common json format (more info here). Note that we store our derived data in a separate folder, METAROOT, to prevent pollution of the original data.

python prepare_data.py --path DATAROOT --path_meta METAROOT --dataset sample

Scannet

Download and extract Scannet by following the instructions provided at http://www.scan-net.org/. You also need to download the train/val/test splits and the label mapping from https://github.com/ScanNet/ScanNet (Benchmark Tasks). The directory structure should look like:

DATAROOT
└───scannet
│   └───scans
│   |   └───scene0000_00
│   |       └───color
│   |       │   │   0.jpg
│   |       │   │   1.jpg
│   |       │   │   ...
│   |       │   ...
│   └───scans_test
│   |       └───color
│   |       │   │   0.jpg
│   |       │   │   1.jpg
│   |       │   │   ...
│   |       │   ...
|   └───scannetv2-labels.combined.tsv
|   └───scannetv2_test.txt
|   └───scannetv2_train.txt
|   └───scannetv2_val.txt

Next run our data preparation script, which parses the raw data format into our common json format (more info here). Note that we store our derived data in a separate folder, METAROOT, to prevent pollution of the original data. This script also generates the ground truth TSDFs using TSDF Fusion.

python prepare_data.py --path DATAROOT --path_meta METAROOT --dataset scannet

This will take a while (a couple of hours on 8 Quadro RTX 6000s). If you have multiple GPUs you can use the --i and --n flags to run in parallel:

python prepare_data.py --path DATAROOT --path_meta METAROOT --dataset scannet --i 0 --n 4 &
python prepare_data.py --path DATAROOT --path_meta METAROOT --dataset scannet --i 1 --n 4 &
python prepare_data.py --path DATAROOT --path_meta METAROOT --dataset scannet --i 2 --n 4 &
python prepare_data.py --path DATAROOT --path_meta METAROOT --dataset scannet --i 3 --n 4 &

Note that if you do not plan to train you can prepare just the test set using the --test flag.
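
For example, preparing only the Scannet test split would look something like:

python prepare_data.py --path DATAROOT --path_meta METAROOT --dataset scannet --test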

Your own data

To use your own data you will need to put it in the same format as the sample data, or implement your own version of something like sample.py. After that you can modify prepare_data.py to also prepare your data. Note that the pretrained models are trained with Z-up metric coordinates and do not generalize to other coordinates, so the scale and two axes of the orientation ambiguity of SFM must be resolved before the poses can be used.
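
As a rough illustration (not code from this repository), the sketch below writes frames into the sample layout and applies a user-supplied similarity transform to bring SFM poses into Z-up metric coordinates; the 4x4 camera-to-world pose convention, the 3x3 intrinsics, and the fix-up transform are assumptions you must work out for your own capture.

# Hypothetical export into the sample-data layout. Assumes 4x4
# camera-to-world poses and a known rotation/scale that makes the
# world frame Z-up and metric (which the pretrained models expect).
import os, shutil
import numpy as np

def export_scene(out_dir, image_paths, poses_cam2world, K, world_fix=np.eye(4), scale=1.0):
    os.makedirs(os.path.join(out_dir, 'color'), exist_ok=True)
    os.makedirs(os.path.join(out_dir, 'pose'), exist_ok=True)
    np.savetxt(os.path.join(out_dir, 'intrinsics.txt'), K)   # assumed 3x3
    S = np.diag([scale, scale, scale, 1.0])                   # metric rescale
    for i, (img, pose) in enumerate(zip(image_paths, poses_cam2world), start=1):
        name = '%08d' % i
        # re-gauge the camera-to-world pose into the Z-up metric world frame
        np.savetxt(os.path.join(out_dir, 'pose', name + '.txt'), S @ world_fix @ pose)
        shutil.copy(img, os.path.join(out_dir, 'color', name + '.jpg'))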

Inference

Once you have downloaded and prepared the data (as described above) you can run inference using our pretrained model (download) or by training your own (see below).

To run on the sample scene use:

python inference.py --model results/release/semseg/final.ckpt --scenes METAROOT/sample/sample1/info.json

If your GPU does not have enough memory you can reduce voxel_dim (at the cost of possibly clipping the scene):

python inference.py --model results/release/semseg/final.ckpt --scenes METAROOT/sample/sample1/info.json --voxel_dim 208 208 80

Note that the values of voxel_dim must be divisible by 8 using the default 3D network.
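
If you want to size voxel_dim to a particular scene extent, a small helper (illustrative only) can round each dimension up to a multiple of 8; the 0.04 m voxel size below is an assumption, so check config.py for the value actually used:

# Round a desired scene extent (in metres) up to voxel counts divisible by 8.
# The 0.04 m voxel size is an assumption; check config.py for the real value.
import math
def voxel_dim_for(extent_m, voxel_size=0.04, multiple=8):
    return [math.ceil(math.ceil(e / voxel_size) / multiple) * multiple for e in extent_m]
print(voxel_dim_for([8.0, 8.0, 3.0]))  # -> [200, 200, 80]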

Results will be saved to:

results/release/semseg/test_final/sample1.ply // mesh
results/release/semseg/test_final/sample1.npz // tsdf
results/release/semseg/test_final/sample1_attributes.npz // vertex semseg
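
To inspect these outputs, the mesh can be opened with trimesh and the .npz files with numpy; the array names inside the .npz files are not documented here, so this snippet just lists whatever keys are present:

# Illustrative inspection of the inference outputs.
import numpy as np
import trimesh
mesh = trimesh.load('results/release/semseg/test_final/sample1.ply')
print(mesh)  # vertex and face counts
print(list(np.load('results/release/semseg/test_final/sample1.npz').keys()))
print(list(np.load('results/release/semseg/test_final/sample1_attributes.npz').keys()))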

To run on the entire Scannet test set use:

python inference.py --model results/release/semseg/final.ckpt

Evaluation

After running inference on Scannet you can run evaluation using:

python evaluate.py --model results/release/semseg/test_final/

Note that evaluate.py uses pyrender to render depth maps from the predicted mesh for 2D evaluation. If you are using headless rendering you must also set the environment variable PYOPENGL_PLATFORM=osmesa (see pyrender for more details).
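
For example, on a headless machine the evaluation command above would be prefixed with the environment variable:

PYOPENGL_PLATFORM=osmesa python evaluate.py --model results/release/semseg/test_final/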

You can print the results of a previous evaluation run using

python visualize_metrics.py --model results/release/semseg/test_final/

Training

In addition to downloading and preparing the data (as described above) you will also need to download our pretrained resnet50 weights (ported from detectron2) and unzip it.

Then you can train your own models using train.py.

Configuration is controlled via a mix of config.yaml files and command line arguments. We provide a few sample config files used in the paper in configs/. Experiment names are specified by TRAINER.NAME and TRAINER.VERSION, which default to atlas and default. See config.py for a full list of parameters.

python train.py --config configs/base.yaml TRAINER.NAME atlas TRAINER.VERSION base
python train.py --config configs/semseg.yaml TRAINER.NAME atlas TRAINER.VERSION semseg

To watch training progress use

tensorboard --logdir results/

COLMAP Baseline

We also provide scripts to run inference and evaluation using COLMAP. Note that you must install COLMAP (which is included in our docker image).

For inference on the sample scene use

python inference_colmap.py --pathout results/colmap --scenes METAROOT/sample/sample1/info.json

and for Scannet

python inference_colmap.py --pathout results/colmap

To evaluate Scannet use

python evaluate_colmap.py --pathout results/colmap

Citation

@inproceedings{murez2020atlas,
  title={Atlas: End-to-End 3D Scene Reconstruction from Posed Images},
  author={Zak Murez and 
          Tarrence van As and 
          James Bartolozzi and 
          Ayan Sinha and 
          Vijay Badrinarayanan and 
          Andrew Rabinovich},
  booktitle = {ECCV},
  year      = {2020},
  url       = {https://arxiv.org/abs/2003.10432}
}
