Multi-label Classification with Partial Annotations using Class-aware Selective Loss


Paper | Pretrained models | OpenImages download

Official PyTorch Implementation

Emanuel Ben-Baruch, Tal Ridnik, Itamar Friedman, Avi Ben-Cohen, Nadav Zamir, Asaf Noy, Lihi Zelnik-Manor
DAMO Academy, Alibaba Group

Abstract

Large-scale multi-label classification datasets are commonly, and perhaps inevitably, partially annotated. That is, only a small subset of labels are annotated per sample. Different methods for handling the missing labels induce different properties on the model and impact its accuracy. In this work, we analyze the partial labeling problem, then propose a solution based on two key ideas. First, un-annotated labels should be treated selectively according to two probability quantities: the class distribution in the overall dataset and the specific label likelihood for a given data sample. We propose to estimate the class distribution using a dedicated temporary model, and we show its improved efficiency over a naive estimation computed using the dataset's partial annotations. Second, during the training of the target model, we emphasize the contribution of annotated labels over originally un-annotated labels by using a dedicated asymmetric loss. Experiments conducted on three partially labeled datasets, OpenImages, LVIS, and simulated-COCO, demonstrate the effectiveness of our approach. Specifically, with our novel selective approach, we achieve state-of-the-art results on OpenImages dataset.

Direct OpenImages Download is Now Available.

We provide direct and convenient access to the OpenImages (V6) dataset. This enables a common and reproducible baseline for benchmarking and future research. See further details here.

Class-aware Selective Approach

An overview of our approach is summarized in the following figure:

[Figure: overview of the class-aware selective approach]

Loss Implementation

Our loss combines a selective approach, which adjusts the training mode for each class individually, with a partial asymmetric loss.

An implementation of the Class-aware Selective Loss (CSL) can be found here.

  • class PartialSelectiveLoss(nn.Module)
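For orientation, below is a minimal sketch of the partial asymmetric loss idea in PyTorch. It is not the repository's PartialSelectiveLoss: the target encoding (1 = annotated positive, 0 = annotated negative, -1 = un-annotated) and the class name are assumptions, and the per-class selective logic is omitted. The three focusing parameters mirror the --gamma_pos, --gamma_neg, and --gamma_unann flags used in the training examples below.

import torch
import torch.nn as nn

class PartialASLSketch(nn.Module):
    """Illustrative partial asymmetric loss (NOT the official PartialSelectiveLoss).

    Assumed target encoding: 1 = annotated positive, 0 = annotated negative,
    -1 = un-annotated. Un-annotated labels are treated as negatives but with
    a stronger focusing parameter (gamma_unann), which de-emphasizes their
    contribution relative to annotated labels.
    """

    def __init__(self, gamma_pos=0.0, gamma_neg=1.0, gamma_unann=4.0, eps=1e-8):
        super().__init__()
        self.gamma_pos = gamma_pos
        self.gamma_neg = gamma_neg
        self.gamma_unann = gamma_unann
        self.eps = eps

    def forward(self, logits, targets):
        xs_pos = torch.sigmoid(logits)
        xs_neg = 1.0 - xs_pos

        pos = (targets == 1).float()
        neg = (targets == 0).float()
        unann = (targets == -1).float()

        # Asymmetric focal weighting: one focusing parameter per label state.
        l_pos = pos * (xs_neg ** self.gamma_pos) * torch.log(xs_pos.clamp(min=self.eps))
        l_neg = neg * (xs_pos ** self.gamma_neg) * torch.log(xs_neg.clamp(min=self.eps))
        l_un = unann * (xs_pos ** self.gamma_unann) * torch.log(xs_neg.clamp(min=self.eps))
        return -(l_pos + l_neg + l_un).sum()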

Pretrained Models

We provide models pretrained on the OpenImages dataset with different training modes for un-annotated labels (Ignore, Negative, Selective) and different architectures:

Model           | Architecture | Link | mAP
Ignore          | TResNet-M    | link | 85.38
Negative        | TResNet-M    | link | 85.85
Selective (CSL) | TResNet-M    | link | 86.72
Selective (CSL) | TResNet-L    | link | 87.34

Inference Code (Demo)

We provide inference code that demonstrates how to load the model, pre-process an image, and run inference. An example run on an OpenImages model (after downloading the relevant checkpoint):

python infer.py  \
--dataset_type=OpenImages \
--model_name=tresnet_m \
--model_path=./models_local/mtresnet_opim_86.72.pth \
--pic_path=./pics/10162266293_c7634cbda9_o.jpg \
--input_size=224
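The same flow can be reproduced in a few lines of Python. The sketch below is a generic illustration, not the repository's infer.py: it assumes you have already constructed the network (e.g. a TResNet-M with the OpenImages class count), and run_inference and its defaults are hypothetical.

import torch
import torchvision.transforms as T
from PIL import Image

def run_inference(model, image_path, class_names, input_size=224, threshold=0.5):
    """Generic multi-label inference flow (hypothetical helper, not infer.py).

    Assumes `model` maps a (1, 3, H, W) image tensor to per-class logits.
    """
    preprocess = T.Compose([
        T.Resize((input_size, input_size)),  # matches the --input_size flag
        T.ToTensor(),
    ])
    img = preprocess(Image.open(image_path).convert('RGB')).unsqueeze(0)
    model.eval()
    with torch.no_grad():
        # Multi-label classification: independent sigmoid per class.
        probs = torch.sigmoid(model(img)).squeeze(0)
    detected = [(class_names[i], float(p)) for i, p in enumerate(probs) if p > threshold]
    return sorted(detected, key=lambda t: -t[1])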

Result Examples

[Figure: example images with predicted labels]

Training Code

Training code is provided in train.py. Code for simulating partial annotations on the MS-COCO dataset is also available (coco_simulation). In particular, two "partial" simulation schemes are implemented, fixed-per-class (FPC) and random-per-sample (RPS); a sketch of both follows the list.

  • FPC: For each class, we randomly sample a fixed number of positive annotations and the same number of negative annotations. The remaining annotations are dropped.
  • RPS: We omit each annotation with probability p.
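The following sketch shows how the two schemes can be applied to a full label matrix. It is a hypothetical illustration, not the repository's coco_simulation code; the helper names and the -1 encoding for dropped annotations are assumptions.

import numpy as np

def simulate_fpc(labels, num_per_class, seed=0):
    """Fixed-per-class (FPC): keep a fixed number of positive and negative
    annotations per class; mark everything else as un-annotated (-1).

    labels: (num_samples, num_classes) array with 1 = positive, 0 = negative.
    """
    rng = np.random.default_rng(seed)
    out = np.full_like(labels, -1)
    for c in range(labels.shape[1]):
        pos = np.flatnonzero(labels[:, c] == 1)
        neg = np.flatnonzero(labels[:, c] == 0)
        k_pos = min(num_per_class, len(pos))
        k_neg = min(num_per_class, len(neg))
        if k_pos:
            out[rng.choice(pos, size=k_pos, replace=False), c] = 1
        if k_neg:
            out[rng.choice(neg, size=k_neg, replace=False), c] = 0
    return out

def simulate_rps(labels, p, seed=0):
    """Random-per-sample (RPS): drop each annotation independently with
    probability p (e.g. p = 0.5 for --simulate_partial_param=0.5)."""
    rng = np.random.default_rng(seed)
    out = labels.copy()
    out[rng.random(labels.shape) < p] = -1
    return out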

Pretrained weights using the ImageNet-21k dataset can be found here: link
Pretrained weights using the ImageNet-1k dataset can be found here: link

Example of training with RPS simulation:

python train.py \
--data=/datasets/COCO/COCO_2014 \
--model-path=models/pretrain/mtresnet_21k \
--gamma_pos=0 \
--gamma_neg=1 \
--gamma_unann=4 \
--simulate_partial_type=rps \
--simulate_partial_param=0.5 \
--partial_loss_mode=selective \
--likelihood_topk=5 \
--prior_threshold=0.5 \
--prior_path=./outputs/priors/prior_fpc_1000.csv

Example of training with FPC simulation:

python train.py \
--data=/mnt/datasets/COCO/COCO_2014 \
--model-path=models/pretrain/mtresnet_21k \
--gamma_pos=0 \
--gamma_neg=3 \
--gamma_unann=4 \
--simulate_partial_type=fpc \
--simulate_partial_param=1000 \
--partial_loss_mode=selective \
--likelihood_topk=5 \
--prior_threshold=0.5 \
--prior_path=./outputs/priors/prior_fpc_1000.csv

Typical Training Results

FPC (1,000) simulation scheme:

Model                          | mAP
Ignore, CE                     | 76.46
Negative, CE                   | 81.24
Negative, ASL (4,1)            | 81.64
CSL - Selective, P-ASL (4,3,1) | 83.44

RPS (0.5) simulation scheme:

Model                          | mAP
Ignore, CE                     | 84.90
Negative, CE                   | 81.21
Negative, ASL (4,1)            | 81.91
CSL - Selective, P-ASL (4,1,1) | 85.21

Estimating the Class Distribution

The training code also contains the procedure for estimating the class distribution from the data. Our approach ranks the classes based on the predictions of a temporary model trained in Ignore mode: link. A minimal sketch of this estimation step follows.
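The sketch below illustrates one way such an estimate can be computed, assuming a temporary model trained in Ignore mode; estimate_class_prior and its signature are hypothetical, not the repository's API.

import torch

def estimate_class_prior(temp_model, loader, num_classes, device='cpu'):
    """Illustrative class-distribution estimate (hypothetical helper).

    Averages the sigmoid predictions of a temporary model (trained in Ignore
    mode) over the dataset, yielding a per-class prior that can be used to
    rank classes by frequency.
    """
    temp_model.eval().to(device)
    prior = torch.zeros(num_classes, device=device)
    n = 0
    with torch.no_grad():
        for images, _ in loader:
            probs = torch.sigmoid(temp_model(images.to(device)))
            prior += probs.sum(dim=0)
            n += images.size(0)
    # Classes whose prior exceeds --prior_threshold can then be treated
    # differently by the selective loss.
    return prior / n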

Top 10 classes:

Method                   | Top 10 ranked classes
Original                 | 'person', 'chair', 'car', 'dining table', 'cup', 'bottle', 'bowl', 'handbag', 'truck', 'backpack'
Estimate (Ignore mode)   | 'person', 'chair', 'handbag', 'cup', 'bench', 'bottle', 'backpack', 'car', 'cell phone', 'potted plant'
Estimate (Negative mode) | 'kite', 'truck', 'carrot', 'baseball glove', 'tennis racket', 'remote', 'cat', 'tie', 'horse', 'boat'

Citation

@misc{benbaruch2021multilabel,
      title={Multi-label Classification with Partial Annotations using Class-aware Selective Loss}, 
      author={Emanuel Ben-Baruch and Tal Ridnik and Itamar Friedman and Avi Ben-Cohen and Nadav Zamir and Asaf Noy and Lihi Zelnik-Manor},
      year={2021},
      eprint={2110.10955},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgements

Several images from the OpenImages dataset are used in this project. Some components of this implementation are adapted from the repository https://github.com/Alibaba-MIIL/ASL.
