Efficient Diffusion for Image Retrieval

This is a faster and improved version of diffusion retrieval, inspired by diffusion-retrieval.

Reference:

If you would like to understand our method in more detail, these slides may help.

Features

  • All random-walk computation is moved offline, making the online search remarkably fast

  • In contrast to previous works, we achieve better performance by applying late truncation to the graph instead of early truncation
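
For orientation, here is a minimal, self-contained sketch of the kind of graph diffusion this repository accelerates: build a kNN affinity graph over the database features, symmetrically normalize it, and solve (I - alpha * S) f = y, where the seed vector y puts mass on the query's nearest neighbours. This is only an illustration on toy data with assumed parameter names, not the repository's offline / late-truncation implementation (see rank.py for that).

    import numpy as np
    from scipy.sparse import csr_matrix, diags, identity
    from scipy.sparse.linalg import cg

    def knn_affinity(X, k=50, gamma=3):
        """Mutual-kNN affinity graph from L2-normalized features X of shape (n, d)."""
        sims = np.clip(X @ X.T, 0, None) ** gamma
        n = sims.shape[0]
        nn = np.argsort(-sims, axis=1)[:, 1:k + 1]      # top-k neighbours, skipping self
        A = np.zeros_like(sims)
        rows = np.repeat(np.arange(n), k)
        A[rows, nn.ravel()] = sims[rows, nn.ravel()]
        A = np.minimum(A, A.T)                          # keep mutual neighbours only
        return csr_matrix(A)

    def diffuse(A, y, alpha=0.99):
        """Solve (I - alpha * S) f = y with S = D^-1/2 A D^-1/2."""
        d = np.asarray(A.sum(axis=1)).ravel()
        d[d == 0] = 1e-12
        Dinv = diags(1.0 / np.sqrt(d))
        S = Dinv @ A @ Dinv
        f, _ = cg(identity(A.shape[0]) - alpha * S, y, atol=1e-6)
        return f

    # Toy usage: 1000 database vectors and one query.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 128)).astype(np.float32)
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    q = X[0]

    A = knn_affinity(X)
    y = np.zeros(X.shape[0])
    seeds = np.argsort(-(X @ q))[:5]                    # query's closest database items
    y[seeds] = X[seeds] @ q
    ranking = np.argsort(-diffuse(A, y))                # diffusion-based ranking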

Requirements

  • Install Facebook FAISS by running conda install faiss-cpu -c pytorch

Optional: install faiss-gpu following the instructions for your CUDA version

  • Install joblib by running conda install joblib

  • Install tqdm by running conda install tqdm
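
A quick, optional sanity check (not part of the repository) that the dependencies import correctly and whether a GPU-enabled FAISS build is available:

    import faiss
    import joblib
    import tqdm

    print("faiss  :", getattr(faiss, "__version__", "unknown"))
    print("joblib :", joblib.__version__)
    print("tqdm   :", tqdm.__version__)

    # get_num_gpus() only exists in faiss-gpu builds, hence the guard.
    gpus = faiss.get_num_gpus() if hasattr(faiss, "get_num_gpus") else 0
    print("GPUs visible to FAISS:", gpus)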

Parameters

All parameters can be modified in the Makefile. You may want to edit DATASET and FEATURE_TYPE to test all combinations of datasets and feature types. The truncation_size parameter is set to 1000 by default; for large datasets such as Oxford105k and Paris106k, increasing it to 5000 will improve performance (the snippet below illustrates what truncation_size controls).
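
As a rough illustration (with made-up shapes and random data, not the repository's code), truncation_size is the number of nearest database items kept per query to form the subgraph that diffusion operates on:

    import numpy as np
    import faiss

    truncation_size = 1000                   # default; try 5000 for Oxford105k / Paris106k
    d = 512                                  # example feature dimension

    gallery = np.random.rand(10000, d).astype("float32")
    queries = np.random.rand(5, d).astype("float32")
    faiss.normalize_L2(gallery)
    faiss.normalize_L2(queries)

    index = faiss.IndexFlatIP(d)             # inner product == cosine after L2 normalization
    index.add(gallery)
    sims, ids = index.search(queries, truncation_size)
    # ids[i] is the truncated neighbourhood used for query i; a larger
    # truncation_size keeps more of the graph at the cost of more computation.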

Run

  • Run make download to download files needed in experiments;

  • Run make mat2npy to convert .mat files to .npy files (an illustrative conversion snippet follows this list);

  • Run make rank to get the results. If you have GPUs, use a command like CUDA_VISIBLE_DEVICES=0,1 make rank, where 0,1 is an example list of GPU ids.
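
For reference, the .mat-to-.npy conversion performed by make mat2npy boils down to something like the following sketch; the file path and key handling here are hypothetical (the real conversion script is part of the repository), and it additionally assumes scipy is installed:

    import numpy as np
    from scipy.io import loadmat

    mat = loadmat("features/oxford5k_resnet_glob.mat")   # hypothetical path
    for key, value in mat.items():
        if not key.startswith("__"):                     # skip MATLAB metadata entries
            np.save(f"features/{key}.npy", np.asarray(value))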

Note: on the Oxford5k and Paris6k datasets, the truncation_size parameter should be no larger than 1024 when using GPUs, due to a limitation of FAISS. You can use CPUs instead.
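
A hedged sketch of that fallback, with a hypothetical helper name: keep the FAISS index on the CPU whenever the requested truncation_size exceeds what the GPU search supports.

    import faiss

    def place_index(index, truncation_size, gpu_k_limit=1024):
        """Move the index to GPU 0 only when the requested k fits the GPU limit."""
        have_gpu = hasattr(faiss, "get_num_gpus") and faiss.get_num_gpus() > 0
        if have_gpu and truncation_size <= gpu_k_limit:
            res = faiss.StandardGpuResources()           # available in faiss-gpu builds
            return faiss.index_cpu_to_gpu(res, 0, index)
        return index                                     # CPU fallback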

Updates!!

  • We changed the evaluation protocol to the official one. Our previous evaluation code had an issue in computing the precision for the first true-positive result, which made the mAP slightly higher than its real value. Since all results in the paper were obtained with the previous evaluation, the comparison remains fair.
  • We provide a new retrieval method that uses all queries at once and produces better performance. If you want to use the algorithm described in the paper, please check search_old in rank.py.
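
One way to read "all queries at once" is to stack the query seed vectors as columns of a single right-hand-side matrix and solve one diffusion system for all of them. The toy sketch below shows only that batched formulation, under assumed toy data, and is not the repository's actual search implementation (see rank.py for that).

    import numpy as np
    from scipy.sparse import identity, random as sparse_random
    from scipy.sparse.linalg import splu

    n, n_queries, alpha = 2000, 50, 0.99
    S = sparse_random(n, n, density=0.01, random_state=0)
    S = 0.5 * (S + S.T)                              # symmetrize the toy affinity matrix
    S = S / abs(S).sum(axis=1).max()                 # crude normalization so I - alpha*S is invertible
    Y = np.random.rand(n, n_queries)                 # one column of seed scores per query

    lu = splu((identity(n) - alpha * S).tocsc())     # factorize once...
    F = lu.solve(Y)                                  # ...and diffuse every query column together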

Authors