
Deep Spectral Methods for Unsupervised Localization and Segmentation (CVPR 2022 - Oral)

Description

This code accompanies the paper Deep Spectral Methods: A Surprisingly Strong Baseline for Unsupervised Semantic Segmentation and Localization.

Abstract

Unsupervised localization and segmentation are long-standing computer vision challenges that involve decomposing an image into semantically-meaningful segments without any labeled data. These tasks are particularly interesting in an unsupervised setting due to the difficulty and cost of obtaining dense image annotations, but existing unsupervised approaches struggle with complex scenes containing multiple objects. Differently from existing methods, which are purely based on deep learning, we take inspiration from traditional spectral segmentation methods by reframing image decomposition as a graph partitioning problem. Specifically, we examine the eigenvectors of the Laplacian of a feature affinity matrix from self-supervised networks. We find that these eigenvectors already decompose an image into meaningful segments, and can be readily used to localize objects in a scene. Furthermore, by clustering the features associated with these segments across a dataset, we can obtain well-delineated, nameable regions, i.e. semantic segmentations. Experiments on complex datasets (Pascal VOC, MS-COCO) demonstrate that our simple spectral method outperforms the state-of-the-art in unsupervised localization and segmentation by a significant margin. Furthermore, our method can be readily used for a variety of complex image editing tasks, such as background removal and compositing.
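
For intuition, here is a minimal NumPy/SciPy sketch of the core idea, as an illustration rather than the repository's exact implementation: build an affinity matrix from self-supervised patch features and read segments off the smallest eigenvectors of its Laplacian.

import numpy as np
from scipy.linalg import eigh
from scipy.sparse.csgraph import laplacian

def spectral_segments(features, k=5):
    """features: (num_patches, dim) patch features -> (num_patches, k) soft segments."""
    feats = features / np.linalg.norm(features, axis=-1, keepdims=True)
    affinity = np.clip(feats @ feats.T, 0, None)  # non-negative cosine affinities
    lap = laplacian(affinity, normed=True)        # normalized graph Laplacian
    # eigenvectors with the smallest eigenvalues carry the coarsest partitions
    _, eigvecs = eigh(lap, subset_by_index=[0, k])
    return eigvecs[:, 1:]                         # drop the (near-)constant first one

Each column, reshaped to the patch grid (e.g. 1/16th of the image resolution for a ViT-S/16 backbone), can be viewed as a soft segmentation mask.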

Demo

Please check out our interactive demo on Huggingface Spaces! The demo enables you to upload an image and outputs the eigenvectors extracted by our method. It does not perform the downstream tasks in our paper (e.g. semantic segmentation), but it should give you some intuition for how you might utilize our method for your own research/use-case.

Examples

How to run

Dependencies

The minimal set of dependencies is listed in requirements.txt.

Data Preparation

Data preparation simply consists of collecting your images into a single folder. Here, we describe the process for Pascal VOC 2012; Pascal VOC 2007 and MS-COCO follow the same pattern.

Download the images into a single folder. Then create a text file where each line contains the name of an image file. For example, here is our initial data layout:

data
└── VOC2012
    ├── images
    │   └── {image_id}.jpg
    └── lists
        └── images.txt
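
If you need to build the image list yourself, a short script along these lines works (the paths follow the example layout above; adjust them to your dataset):

from pathlib import Path

root = Path("./data/VOC2012")
(root / "lists").mkdir(parents=True, exist_ok=True)

# one image filename per line, sorted for reproducibility
names = sorted(p.name for p in (root / "images").glob("*.jpg"))
(root / "lists" / "images.txt").write_text("\n".join(names) + "\n")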

Extraction

We first extract features from images and store them in files. We then extract eigenvectors from these features. Once we have the eigenvectors, we can perform downstream tasks such as object segmentation and object localization.

The primary script for this extraction process is extract.py in the extract/ directory. All functions in extract.py have helpful docstrings with example usage.

Step 1: Feature Extraction

First, we extract features from our images and save them to .pth files.

With regard to models, our repository currently only supports DINO, but other models are easy to add (see the get_model function in extract_utils.py). The DINO model is downloaded automatically using torch.hub.
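
For reference, the same backbone can be loaded directly via torch.hub; the entry point below is the one published by the official facebookresearch/dino repository:

import torch

model = torch.hub.load('facebookresearch/dino:main', 'dino_vits16')
model.eval()

# ViT-S/16 splits a 224x224 image into a 14x14 grid of 16x16 patches
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    cls_embedding = model(x)  # (1, 384) image-level embedding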

Here is an example using dino_vits16:

python extract.py extract_features \
    --images_list "./data/VOC2012/lists/images.txt" \
    --images_root "./data/VOC2012/images" \
    --output_dir "./data/VOC2012/features/dino_vits16" \
    --model_name dino_vits16 \
    --batch_size 1

Step 2: Eigenvector Computation

Second, we extract eigenvectors from our features and save them to .pth files.

Here, we extract the top K=5 eigenvectors of the Laplacian matrix of our features:

python extract.py extract_eigs \
    --images_root "./data/VOC2012/images" \
    --features_dir "./data/VOC2012/features/dino_vits16" \
    --which_matrix "laplacian" \
    --output_dir "./data/VOC2012/eigs/laplacian" \
    --K 5

The final data structure after extracting eigenvectors looks like:

data
├── VOC2012
│   ├── eigs
│   │   └── {output_dir_name}
│   │       └── {image_id}.pth
│   ├── features
│   │   └── {model_name}
│   │       └── {image_id}.pth
│   ├── images
│   │   └── {image_id}.jpg
│   └── lists
│       └── images.txt
└── VOC2007
    └── ...

At this point, you are ready to use the eigenvectors for downstream tasks such as object localization, object segmentation, and semantic segmentation.
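
As a quick sanity check, the sketch below loads a saved eigenvector file and thresholds the Fiedler eigenvector (the one with the second-smallest eigenvalue) into a coarse foreground mask. The dictionary key and the patch-grid size are assumptions for illustration; inspect one of your .pth files to confirm the actual format.

import torch
import torch.nn.functional as F

data = torch.load("./data/VOC2012/eigs/laplacian/{image_id}.pth")
eigvecs = data["eigenvectors"]             # assumed key; shape (K, num_patches)
h, w = 30, 40                              # assumed patch grid (image size / patch size)
fiedler = eigvecs[1].float().reshape(1, 1, h, w)
mask = (fiedler > fiedler.mean()).float()  # threshold around the mean
mask_full = F.interpolate(mask, scale_factor=16, mode="nearest")[0, 0]  # pixel-level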

Object Localization

First, clone the dino repo inside this project root (or symlink it).

git clone https://github.com/facebookresearch/dino

Run the steps above to save your eigenvectors inside a directory, which we will now call ${EIGS_DIR}. You can then move to the object-localization directory and evaluate object localization with:

python main.py \
    --eigenseg \
    --precomputed_eigs_dir ${EIGS_DIR} \
    --dataset VOC12 \
    --name "example_eigs"

Object Segmentation

To perform object segmentation (i.e. single-region segmentations), you first extract features and eigenvectors (as described above). You then extract coarse (i.e. patch-level) single-region segmentations from the eigenvectors, and then turn these into high-resolution segmentations using a CRF.
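
For intuition, here is a sketch of the CRF refinement step using the pydensecrf package; the pairwise-potential hyperparameters are illustrative assumptions rather than the repository's exact settings.

import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_labels

def refine_with_crf(image, coarse_labels):
    """image: (H, W, 3) uint8; coarse_labels: (H, W) ints in {0, 1} -> refined labels."""
    h, w = coarse_labels.shape
    crf = dcrf.DenseCRF2D(w, h, 2)
    crf.setUnaryEnergy(unary_from_labels(coarse_labels, 2, gt_prob=0.7, zero_unsure=False))
    crf.addPairwiseGaussian(sxy=3, compat=3)  # spatial smoothness term
    crf.addPairwiseBilateral(sxy=80, srgb=13, rgbim=np.ascontiguousarray(image), compat=10)
    q = crf.inference(10)                     # 10 mean-field iterations
    return np.argmax(q, axis=0).reshape(h, w)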

Below, we give example commands for the CUB bird dataset (CUB_200_2011). To download this dataset, as well as the three other object segmentation datasets used in our paper, you can follow the instructions in unsupervised-image-segmentation. Then make sure to specify the data_root parameter in configs/eval.yaml.

For example:

# Example dataset
DATASET=CUB_200_2011

# Features
python extract.py extract_features \
    --images_list "./data/object-segmentation/${DATASET}/lists/images.txt" \
    --images_root "./data/object-segmentation/${DATASET}/images" \
    --output_dir "./data/object-segmentation/${DATASET}/features/dino_vits16" \
    --model_name dino_vits16 \
    --batch_size 1

# Eigenvectors
python extract.py extract_eigs \
    --images_root "./data/object-segmentation/${DATASET}/images" \
    --features_dir "./data/object-segmentation/${DATASET}/features/dino_vits16/" \
    --which_matrix "laplacian" \
    --output_dir "./data/object-segmentation/${DATASET}/eigs/laplacian_dino_vits16" \
    --K 2

# Extract single-region segmentations
python extract.py extract_single_region_segmentations \
    --features_dir "./data/object-segmentation/${DATASET}/features/dino_vits16" \
    --eigs_dir "./data/object-segmentation/${DATASET}/eigs/laplacian_dino_vits16" \
    --output_dir "./data/object-segmentation/${DATASET}/single_region_segmentation/patches/laplacian_dino_vits16"

# With CRF
# Optionally, you can also use `--multiprocessing 64` to speed up computation by running on 64 processes
python extract.py extract_crf_segmentations \
    --images_list "./data/object-segmentation/${DATASET}/lists/images.txt" \
    --images_root "./data/object-segmentation/${DATASET}/images" \
    --segmentations_dir "./data/object-segmentation/${DATASET}/single_region_segmentation/patches/laplacian_dino_vits16" \
    --output_dir "./data/object-segmentation/${DATASET}/single_region_segmentation/crf/laplacian_dino_vits16" \
    --downsample_factor 16 \
    --num_classes 2

After this extraction process, you should have a directory of full-resolution segmentations. To evaluate object segmentation, move into the object-segmentation directory and run python main.py. For example:

python main.py predictions.root="./data/object-segmentation" predictions.run="single_region_segmentation/crf/laplacian_dino_vits16"

By default, this assumes that all four object segmentation datasets are available. To run on a custom dataset or only a subset of these datasets, simply edit configs/eval.yaml.

Also, if you want to visualize your segmentations, you should be able to use streamlit run extract.py vis_segmentations (after installing streamlit).

Semantic Segmentation

For semantic segmentation, we provide full instructions in the semantic-segmentation subfolder.

Acknowledgements

L. M. K. acknowledges the generous support of the Rhodes Trust. C. R. is supported by Innovate UK (project 71653) on behalf of UK Research and Innovation (UKRI) and by the European Research Council (ERC) IDIU-638009. I. L. and A. V. are supported by the VisualAI EPSRC programme grant (EP/T028572/1).

We would like to acknowledge LOST (paper and code), whose code we adapt for our object localization experiments. If you are interested in object localization, we suggest checking out their work!

Citation

@inproceedings{melaskyriazi2022deep,
    title={Deep Spectral Methods: A Surprisingly Strong Baseline for Unsupervised Semantic Segmentation and Localization},
    author={Luke Melas-Kyriazi and Christian Rupprecht and Iro Laina and Andrea Vedaldi},
    year={2022},
    booktitle={CVPR}
}
