Trans-INR

This repository contains the official implementation for the following paper:

Transformers as Meta-Learners for Implicit Neural Representations
Yinbo Chen, Xiaolong Wang
ECCV 2022

Project page: https://yinboc.github.io/trans-inr/.

@inproceedings{chen2022transinr,
  title={Transformers as Meta-Learners for Implicit Neural Representations},
  author={Chen, Yinbo and Wang, Xiaolong},
  booktitle={European Conference on Computer Vision},
  year={2022},
}

Reproducing Experiments

Environment

  • Python 3
  • PyTorch 1.12.0
  • pyyaml, numpy, tqdm, imageio, TensorboardX, wandb, einops (a sample install command follows this list)
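
For reference, a minimal environment setup sketch (assuming a pip-based install; the correct PyTorch 1.12.0 build depends on your CUDA version):

  # Install PyTorch 1.12.0 first; pick the build matching your CUDA version.
  pip install torch==1.12.0
  # Remaining dependencies listed above (versions are not pinned by the authors).
  pip install pyyaml numpy tqdm imageio tensorboardX wandb einops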

Data

Run mkdir data and put the individual dataset folders inside it (an example of the expected layout follows the list below).

  • CelebA: download (from Kaggle), extract, and rename the folder to celeba (so that images are in data/celeba/img_align_celeba/img_align_celeba).

  • Imagenette: download, extract, and rename the folder to imagenette.

  • View synthesis: download the data from Google Drive (provided by learnit), put it in a folder named learnit_shapenet, unzip the category folders, and rename them chairs, cars, and lamps accordingly.
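
After these steps, the data/ directory should look roughly like this (the CelebA path is stated above; the other entries reflect the steps above but are an assumption):

  data/
    celeba/img_align_celeba/img_align_celeba/   # CelebA images
    imagenette/                                 # extracted Imagenette folder
    learnit_shapenet/
      chairs/
      cars/
      lamps/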

Training

Run CUDA_VISIBLE_DEVICES=[GPU] python run_trainer.py --cfg [CONFIG]; the configuration files are in cfgs/.
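
For example (the config filename here is hypothetical; substitute an actual file from cfgs/):

  # Train on GPU 0 with a config from cfgs/ (hypothetical filename).
  CUDA_VISIBLE_DEVICES=0 python run_trainer.py --cfg cfgs/celeba.yaml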

To enable wandb logging, fill in wandb.yaml (in the repository root) and add -w to the training command.

When running several multi-GPU training processes at once, give each one a different -p value (0, 1, 2, ...) so that they bind to different ports.
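
For instance, two concurrent multi-GPU runs might look like this (config filenames are again hypothetical):

  # Two concurrent runs on separate GPU pairs; -p gives each a distinct port.
  CUDA_VISIBLE_DEVICES=0,1 python run_trainer.py --cfg cfgs/celeba.yaml -p 0
  CUDA_VISIBLE_DEVICES=2,3 python run_trainer.py --cfg cfgs/imagenette.yaml -p 1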

Evaluation

For image reconstruction, test PSNR is automatically evaluated in the training script.

For view synthesis, run on a single GPU with the configs in cfgs/nvs_eval (an example follows below). To enable test-time optimization, uncomment (remove the #) the tto_steps entry in the configs.
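
A sketch of such an evaluation run, assuming the same run_trainer.py entry point (the config filename is hypothetical; use a real file from cfgs/nvs_eval/):

  # Evaluate view synthesis on a single GPU (hypothetical config filename).
  CUDA_VISIBLE_DEVICES=0 python run_trainer.py --cfg cfgs/nvs_eval/chairs.yaml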