Tri-MipRF
Official PyTorch implementation (coming soon) for the paper:
Tri-MipRF: Tri-Mip Representation for Efficient Anti-Aliasing Neural Radiance Fields
ICCV 2023
Wenbo Hu, Yuling Wang, Lin Ma, Bangbang Yang, Lin Gao, Xiao Liu, Yuewen Ma
Video: lego.mp4
Instant-ngp (left) suffers from aliasing in distant or low-resolution views and blurriness in close-up shots, while Tri-MipRF (right) renders both fine-grained details in close-ups and high-fidelity zoomed-out images.
To render a pixel, we emit a cone from the camera's projection center through the pixel on the image plane, and then cast a set of spheres inside the cone. Next, the spheres are orthogonally projected onto the three planes and featurized by our Tri-Mip encoding. After that, the feature vector is fed into a tiny MLP that non-linearly maps it to density and color. Finally, the density and color of the spheres are integrated using volume rendering to produce the final color of the pixel.
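A minimal PyTorch sketch of this pipeline is shown below, not the official implementation. The plane resolution, feature size, MLP width, near/far bounds, and the radius-to-mip-level mapping are all illustrative assumptions; the real model also conditions color on view direction and skips empty space, which are omitted here for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TriMipEncoding(nn.Module):
    """Sketch of a Tri-Mip encoding: three learnable 2D feature planes,
    queried by trilinear interpolation (bilinear within a mip level, plus
    linear blending across adjacent levels chosen from the sphere radius).
    The mip pyramid is built here by average pooling as a stand-in."""

    def __init__(self, base_res=512, n_levels=8, n_feat=16):
        super().__init__()
        self.n_levels = n_levels
        self.planes = nn.Parameter(
            0.1 * torch.randn(3, n_feat, base_res, base_res))

    def forward(self, xyz, radius):
        # xyz: (N, 3) sphere centers in [-1, 1]^3; radius: (N, 1) sphere radii.
        # Orthogonally project the centers onto the XY, XZ, and YZ planes.
        grid = torch.stack(
            [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]
        ).unsqueeze(2)                                          # (3, N, 1, 2)
        feats, plane = [], self.planes
        for _ in range(self.n_levels):                          # query every level
            f = F.grid_sample(plane, grid, align_corners=True)  # (3, C, N, 1)
            feats.append(
                f.squeeze(-1).permute(2, 0, 1).reshape(xyz.shape[0], -1))
            plane = F.avg_pool2d(plane, 2)                      # next coarser level
        feats = torch.stack(feats)                              # (L, N, 3*C)
        # Continuous mip level from the sphere radius (an assumed mapping).
        level = torch.log2(radius * self.planes.shape[-1]).clamp(
            0, self.n_levels - 1)
        lo = level.floor().long().squeeze(-1)
        hi = (lo + 1).clamp(max=self.n_levels - 1)
        w = level - lo.unsqueeze(-1)                            # blend weight
        idx = torch.arange(xyz.shape[0], device=xyz.device)
        return (1 - w) * feats[lo, idx] + w * feats[hi, idx]    # (N, 3*C)


class TriMipRF(nn.Module):
    """Encoding + tiny MLP + volume rendering for a batch of pixels."""

    def __init__(self):
        super().__init__()
        self.encoding = TriMipEncoding()
        self.mlp = nn.Sequential(          # "tiny MLP"; width is illustrative
            nn.Linear(48, 64), nn.ReLU(), nn.Linear(64, 4))

    def render(self, origins, dirs, pixel_radius, n_samples=128):
        # Emit a cone per pixel: sample distances t along its axis and give
        # each sphere a radius growing linearly with t (the cone footprint).
        t = torch.linspace(0.1, 4.0, n_samples,
                           device=origins.device)               # assumed near/far
        xyz = origins[:, None] + t[None, :, None] * dirs[:, None]   # (R, S, 3)
        radius = (pixel_radius[:, None] * t[None, :]).unsqueeze(-1)  # (R, S, 1)
        feat = self.encoding(xyz.reshape(-1, 3), radius.reshape(-1, 1))
        raw = self.mlp(feat).reshape(xyz.shape[0], n_samples, 4)
        sigma = F.softplus(raw[..., 0])                 # per-sphere density
        rgb = torch.sigmoid(raw[..., 1:])               # per-sphere color
        # Standard volume-rendering quadrature over the samples.
        delta = torch.cat([t[1:] - t[:-1], t.new_full((1,), 1e10)])
        alpha = 1.0 - torch.exp(-sigma * delta)
        trans = torch.cumprod(
            torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10],
                      dim=1), dim=1)[:, :-1]
        weights = alpha * trans                         # contribution per sphere
        return (weights.unsqueeze(-1) * rgb).sum(dim=1)  # (R, 3) pixel colors
```

For example, rendering a handful of rays (`pixel_radius` here is a hypothetical per-pixel cone radius at unit distance):

```python
model = TriMipRF()
origins = torch.zeros(4, 3)
dirs = F.normalize(torch.randn(4, 3), dim=-1)
pixel_radius = torch.full((4,), 1e-3)
colors = model.render(origins, dirs, pixel_radius)   # (4, 3)
```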
Our Tri-MipRF achieves state-of-the-art rendering quality while being efficient to reconstruct, compared with cutting-edge radiance field methods, e.g., NeRF, MipNeRF, Plenoxels, TensoRF, and Instant-ngp. Equipping Instant-ngp with super-sampling (denoted Instant-ngp↑5×) improves the rendering quality to a certain extent but significantly slows down the reconstruction.
TODO
- Release source code.
Citation
If you find the code useful for your work, please star this repo and consider citing:
```bibtex
@inproceedings{hu2023Tri-MipRF,
    author    = {Hu, Wenbo and Wang, Yuling and Ma, Lin and Yang, Bangbang and Gao, Lin and Liu, Xiao and Ma, Yuewen},
    title     = {Tri-MipRF: Tri-Mip Representation for Efficient Anti-Aliasing Neural Radiance Fields},
    booktitle = {ICCV},
    year      = {2023}
}
```