
MatchNeRF

Official PyTorch implementation for MatchNeRF, a new generalizable NeRF approach that employs explicit correspondence matching as the geometry prior and can perform novel view synthesis on unseen scenarios with as few as two source views as input, without requiring any retraining and fine-tuning.

Explicit Correspondence Matching for Generalizable Neural Radiance Fields
Yuedong Chen¹, Haofei Xu², Qianyi Wu¹, Chuanxia Zheng³, Tat-Jen Cham⁴, Jianfei Cai¹
¹Monash University, ²ETH Zurich, ³University of Oxford, ⁴Nanyang Technological University
arXiv 2023

Paper | Project Page | Code

Recent Updates
  • 25-Apr-2023: released MatchNeRF codes and models.


Table of Contents

  • Setup Environment
  • Download Datasets
  • Testing
  • Training
  • Rendering Video
  • Use Your Own Data
  • Miscellaneous

Setup Environment

This project was developed and tested on a CUDA 11 device. For other CUDA versions, manually update the requirements.txt file to match your setup before proceeding.

git clone --recursive https://github.com/donydchen/matchnerf.git
cd matchnerf
conda create --name matchnerf python=3.8
conda activate matchnerf
pip install -r requirements.txt
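As a quick sanity check (a minimal sketch, not part of the repo), you can confirm that the installed PyTorch build sees your GPU and was built against the expected CUDA version:

# Illustrative sanity check, not part of the repo: verify the PyTorch CUDA build.
import torch

print("PyTorch version:", torch.__version__)
print("Built against CUDA:", torch.version.cuda)      # e.g. an 11.x build for a CUDA 11 device
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))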

Rendering video output requires ffmpeg to be installed on the system; you can verify this by running ffmpeg -version. If ffmpeg is not available, consider installing it with conda install ffmpeg.
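If you prefer to check from Python, a minimal sketch (hypothetical helper, not part of the repo) would be:

# Hypothetical helper: verify that ffmpeg is on PATH before rendering videos.
import shutil
import subprocess

if shutil.which("ffmpeg") is None:
    raise SystemExit("ffmpeg not found; install it, e.g. via `conda install ffmpeg`")
subprocess.run(["ffmpeg", "-version"], check=True)   # print the installed version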

Download Datasets

DTU (for both training and testing)

  • Download the preprocessed DTU training data dtu_training.rar and Depth_raw.zip from the original MVSNet repo.

  • Extract Cameras/ and Rectified/ from the downloaded dtu_training.rar, and extract Depths/ from Depth_raw.zip. Link all three folders to data/DTU, which should then have the following structure:

data/DTU/
    |__ Cameras/
    |__ Depths/
    |__ Rectified/
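A quick way to confirm the layout (a hypothetical check, not part of the repo):

# Hypothetical check that the expected DTU folders are in place.
from pathlib import Path

root = Path("data/DTU")
for sub in ("Cameras", "Depths", "Rectified"):
    print(root / sub, "ok" if (root / sub).is_dir() else "MISSING")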

Blender (for testing only)

Real Forward Facing (for testing only)

Testing

MVSNeRF Setting (3 Nearest Views)

Download the pretrained model matchnerf_3v.pth and save to configs/pretrained_models/matchnerf_3v.pth, then run

python test.py --yaml=test --name=matchnerf_3v

If you encounter a CUDA out-of-memory error, reduce the number of sampled rays, e.g., by appending --nerf.rand_rays_test=4096 to the command.
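This flag caps how many rays are evaluated per test-time batch; conceptually it amounts to chunked rendering, as in the illustrative sketch below (model and rays are hypothetical stand-ins, not the repo's actual API):

# Illustrative only: smaller ray batches lower peak GPU memory at test time.
import torch

def render_in_chunks(model, rays, chunk=4096):
    """Evaluate `rays` in batches of `chunk` so only one batch is resident on the GPU at a time."""
    outputs = []
    with torch.no_grad():
        for i in range(0, rays.shape[0], chunk):
            outputs.append(model(rays[i:i + chunk]))
    return torch.cat(outputs, dim=0)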

Performance should exactly match the numbers below:

Dataset               PSNR    SSIM    LPIPS
DTU                   26.91   0.934   0.159
Real Forward Facing   22.43   0.805   0.244
Blender               23.20   0.897   0.164
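For reference, PSNR here is the standard peak signal-to-noise ratio; a minimal implementation (assuming images are float arrays in [0, 1], not the repo's exact evaluation code) looks like:

# Minimal PSNR reference, assuming images normalized to [0, 1].
import numpy as np

def psnr(pred, gt, max_val=1.0):
    mse = np.mean((np.asarray(pred, dtype=np.float64) - np.asarray(gt, dtype=np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)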

Training

Download the GMFlow pretrained weight (gmflow_sintel-0c07dcb3.pth) from the original GMFlow repo, and save it to configs/pretrained_models/gmflow_sintel-0c07dcb3.pth, then run

python train.py --yaml=train
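To confirm the downloaded GMFlow weight deserializes correctly before launching a long training run, a quick check (the key layout is an assumption) is:

# Quick check that the GMFlow checkpoint loads; the exact key layout may differ.
import torch

ckpt = torch.load("configs/pretrained_models/gmflow_sintel-0c07dcb3.pth", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:5])   # peek at the first few top-level keys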

Rendering Video

python test.py --yaml=test_video --name=matchnerf_3v_video
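Under the hood, the rendered frames are assembled into a video with ffmpeg; a hypothetical equivalent (the frame naming, frame rate, and output path are assumptions, not the repo's actual settings) is:

# Hypothetical sketch of stitching rendered frames into an mp4 with ffmpeg;
# frame naming, frame rate, and output path are assumptions.
import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-framerate", "30",
    "-i", "output/frames/%04d.png",
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "output/video.mp4",
], check=True)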

Results (without any per-scene fine-tuning) should be similar to those shown below.

Visual Results

DTU: scan38_view24

Blender: materials_view36

Real Forward Facing: leaves_view13

Use Your Own Data

  • Download the model (matchnerf_3v_ibr.pth) pretrained with IBRNet data (follow 'GPNR Setting 1'), and save it to configs/pretrained_models/matchnerf_3v_ibr.pth.
  • Following the instructions detailed in the LLFF repo, use img2poses.py to recover camera poses.
  • Update the colmap data loader at datasets/colmap.py accordingly (a minimal pose-loading sketch follows below).
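For reference, img2poses.py writes a poses_bounds.npy file with one 17-value row per image (a flattened 3x5 pose matrix plus near/far depth bounds); a minimal loading sketch (the scene path is hypothetical, and datasets/colmap.py may organize things differently) is:

# Minimal sketch of reading LLFF's poses_bounds.npy; the scene path is hypothetical.
import numpy as np

poses_bounds = np.load("data/my_scene/poses_bounds.npy")    # shape (N, 17)
poses = poses_bounds[:, :15].reshape(-1, 3, 5)              # 3x4 camera-to-world pose + [H, W, focal] column
bounds = poses_bounds[:, 15:]                               # per-view near/far depth bounds
print(poses.shape, bounds.shape)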

We provide the following 3-input-view demo for your reference.

# lower resolution but fast
python test.py --yaml=demo_own
# full version
python test.py --yaml=test_video_own

The generated video will look like the following:

Demo: own data, printer

Miscellaneous

Citation

If you use this project for your research, please cite our paper.

@article{chen2023matchnerf,
    title={Explicit Correspondence Matching for Generalizable Neural Radiance Fields},
    author={Chen, Yuedong and Xu, Haofei and Wu, Qianyi and Zheng, Chuanxia and Cham, Tat-Jen and Cai, Jianfei},
    journal={arXiv preprint arXiv:2304.12294},
    year={2023}
}

Pull Request

You are more than welcome to contribute to this project by sending a pull request.

Acknowledgments

This implementation borrows many code snippets from GMFlow, MVSNeRF, BARF and GIRAFFE. Many thanks to all of the above-mentioned projects.

More Repositories

  1. mvsplat (Python, 548 stars): 🌊 [ECCV'24] MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images
  2. ganimation_replicate (Python, 216 stars): An Out-of-the-Box Replication of GANimation using PyTorch, pretrained weights are available!
  3. sem2nerf (Python, 123 stars): 👩🏼‍🦰😺 [ECCV'22] Official PyTorch Implementation of Sem2NeRF: Converting Single-View Semantic Masks to NeRFs
  4. FMPN-FER (Python, 92 stars): 😁 [VCIP'19 Oral] Official PyTorch Implementation of Facial Motion Prior Networks for Facial Expression Recognition
  5. causal_emotion (Python, 15 stars): ☯︎ [ACMMM'22] Official PyTorch Implementation of Towards Unbiased Visual Emotion Recognition via Causal Intervention
  6. ran_replicate (Python, 10 stars): A PyTorch re-implementation of Weakly Supervised Facial Action Unit Recognition through Adversarial Training
  7. landmark-tool (Python, 7 stars): A simple image landmark tool written in PyQt
  8. Douban (C#, 7 stars): A small Win8 Metro app that shows detailed information on Douban's Top 250 movies along with the latest reviews; it uses an SQLite database and implements search, animation, sharing, networking, data storage, tiles, and multithreading
  9. image-caption-cpp (C++, 4 stars): A data-driven query expansion approach for image captioning, implemented in C++
  10. Dragon-Front (Java, 3 stars): Comments for A Complete Front End of the dragon book
  11. Agenda (C++, 3 stars): A simple C++ project for freshmen in SS of SYSU
  12. ExprEval (Java, 2 stars): An expression-based calculator, built with Eclipse Java
  13. EAlbum (Assembly, 1 star): An electronic album running on PXA270
  14. multimedia (Python, 1 star): Homework projects for the course Multimedia Technology and Applications
  15. CS231n (Jupyter Notebook, 1 star): Assignments for CS231n
  16. donydchen.github.io (HTML, 1 star): Yuedong CHEN's homepage and project pages