

LCD: Learned Cross-domain Descriptors for 2D-3D Matching

This is the official PyTorch implementation of the following publication:

LCD: Learned Cross-domain Descriptors for 2D-3D Matching
Quang-Hieu Pham, Mikaela Angelina Uy, Binh-Son Hua, Duc Thanh Nguyen, Gemma Roig, Sai-Kit Yeung
AAAI Conference on Artificial Intelligence, 2020 (Oral)
Paper | Homepage | Video

2D-3D Match Dataset

Download http://hkust-vgd.ust.hk/2d3dmatch/

We collect a new dataset of 2D-3D correspondences by leveraging the availability of several 3D datasets from RGB-D scans. Specifically, we use the data from SceneNN and 3DMatch. Our training dataset consists of 110 RGB-D scans, of which 56 scenes are from SceneNN and 54 scenes are from 3DMatch. The 2D-3D correspondence data is generated as follows. Given a 3D point randomly sampled from a 3D point cloud, we extract a set of 3D patches from different scanning views. To find a 2D-3D correspondence, for each 3D patch, we re-project its 3D position into all RGB-D frames in which the point lies in the camera frustum, taking occlusion into account. We then extract the corresponding local 2D patches around the re-projected point. In total, we collected around 1.4 million 2D-3D correspondences.
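The re-projection step above can be sketched with a standard pinhole camera model. The sketch below is purely illustrative (it is not the repository's data-generation code, and the intrinsics and pose values are made-up placeholders); it shows how a 3D point is tested against a camera frustum and mapped to a pixel location:

```python
def project_point(point_w, R, t, fx, fy, cx, cy, width, height):
    """Project a 3D world point into pixel coordinates; return None
    if it falls outside the camera frustum (occlusion not handled here)."""
    # World -> camera coordinates: p_c = R @ p_w + t
    x = sum(R[0][i] * point_w[i] for i in range(3)) + t[0]
    y = sum(R[1][i] * point_w[i] for i in range(3)) + t[1]
    z = sum(R[2][i] * point_w[i] for i in range(3)) + t[2]
    if z <= 0:  # behind the camera
        return None
    u = fx * x / z + cx  # pinhole projection
    v = fy * y / z + cy
    if 0 <= u < width and 0 <= v < height:
        return (u, v)
    return None

# Identity pose and typical VGA-style intrinsics (illustrative values only)
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 0.0]
pixel = project_point([0.1, 0.2, 2.0], R, t, 525.0, 525.0, 319.5, 239.5, 640, 480)
```

A real pipeline would additionally compare the projected depth against the frame's depth map to reject occluded points before cropping the local 2D patch.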

Usage

Prerequisites

PyTorch 1.2 or newer is required, along with some other dependencies.

Pre-trained models

We released three pre-trained LCD models with different descriptor sizes: LCD-D256, LCD-D128, and LCD-D64. All of the models can be found in the logs folder.

Training

After downloading our dataset, put all of the HDF5 files into the data folder.

To train a model on the 2D-3D Match dataset, use the following command:

$ python train.py --config config.json --logdir logs/LCD

Log files and network parameters will be saved to the logs/LCD folder.
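For orientation only, a config.json for a training script like this typically specifies the data location, descriptor size, and optimizer settings. The keys and values below are hypothetical; consult the config.json shipped with the repository for the actual schema:

```json
{
  "data": "data/",
  "descriptor_size": 256,
  "batch_size": 64,
  "learning_rate": 0.001,
  "epochs": 250
}
```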

Applications

Aligning two point clouds with LCD

This demo aligns two 3D colored point clouds using our pre-trained LCD descriptor with RANSAC. How to run:

$ python -m apps.align_point_cloud samples/000.ply samples/002.ply --logdir logs/LCD-D256/

For more information, use the --help option.

After aligning two input point clouds, the final registration result will be shown. For example:

Aligned point clouds

Note: This demo requires Open3D to be installed.
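Conceptually, RANSAC registration repeatedly samples a minimal set of putative descriptor matches, estimates a rigid transform from that sample, and keeps the transform with the most inliers. The toy loop below illustrates the idea on 2D points with plain Python; the actual demo operates on 3D point clouds with LCD descriptors and Open3D, so treat this as a conceptual sketch only:

```python
import math
import random

def rigid_2d(p, q):
    """Estimate a 2D rotation angle and translation from two correspondences
    (the minimal sample for a 2D rigid transform)."""
    a1 = math.atan2(p[1][1] - p[0][1], p[1][0] - p[0][0])
    a2 = math.atan2(q[1][1] - q[0][1], q[1][0] - q[0][0])
    theta = a2 - a1
    c, s = math.cos(theta), math.sin(theta)
    tx = q[0][0] - (c * p[0][0] - s * p[0][1])
    ty = q[0][1] - (s * p[0][0] + c * p[0][1])
    return theta, (tx, ty)

def ransac_align(src, dst, iters=200, thresh=0.05, seed=0):
    """Return the transform with the most inliers over random minimal samples."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        i, j = rng.sample(range(len(src)), 2)
        theta, (tx, ty) = rigid_2d((src[i], src[j]), (dst[i], dst[j]))
        c, s = math.cos(theta), math.sin(theta)
        inliers = sum(
            math.hypot(c * x - s * y + tx - u, s * x + c * y + ty - v) < thresh
            for (x, y), (u, v) in zip(src, dst))
        if inliers > best_inliers:
            best, best_inliers = (theta, (tx, ty)), inliers
    return best, best_inliers

# Source points rotated 90 degrees and shifted, with one bad match appended
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(-y + 2, x + 3) for x, y in src[:3]] + [(9.0, 9.0)]
(theta, trans), n_inliers = ransac_align(src, dst)
```

The bad correspondence is rejected as an outlier, and the recovered transform matches the true 90-degree rotation and (2, 3) translation. The 3D version works the same way, except the minimal sample is three matches and the transform is estimated with an SVD-based solver.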

Prepare your own dataset

We provide two scripts that we found useful during our data processing. Please take a look and adapt them to your needs.

  • scripts/sample_train.py: Sample 2D-3D correspondences from the 3DMatch dataset
  • scripts/convert_valtest.py: Convert the val-set.mat and test-set.mat files from 3DMatch into HDF5 format.

Citation

If you find our work useful for your research, please consider citing:

@inproceedings{pham2020lcd,
  title = {{LCD}: {L}earned cross-domain descriptors for 2{D}-3{D} matching},
  author = {Pham, Quang-Hieu and Uy, Mikaela Angelina and Hua, Binh-Son and Nguyen, Duc Thanh and Roig, Gemma and Yeung, Sai-Kit},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year = 2020
}

If you use our dataset, please cite the following papers:

@inproceedings{hua2016scenenn,
  title = {{SceneNN}: {A} scene meshes dataset with a{NN}otations},
  author = {Hua, Binh-Son and Pham, Quang-Hieu and Nguyen, Duc Thanh and Tran, Minh-Khoi and Yu, Lap-Fai and Yeung, Sai-Kit},
  booktitle = {International Conference on 3D Vision},
  year = 2016
}

@inproceedings{zeng20173dmatch,
  title = {{3DMatch}: {L}earning local geometric descriptors from {RGB}-{D} reconstructions},
  author= {Zeng, Andy and Song, Shuran and Nie{\ss}ner, Matthias and Fisher, Matthew and Xiao, Jianxiong and Funkhouser, Thomas},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
  year = 2017
}

License

Our code is released under BSD 3-Clause license (see LICENSE for more details).

Our dataset is released under CC BY-NC-SA 4.0 license.

Contact: Quang-Hieu Pham ([email protected])
