
Camouflaged Object Detection (CVPR2020-Oral)

Authors: Deng-Ping Fan, Ge-Peng Ji, Guolei Sun, Ming-Ming Cheng, Jianbing Shen, Ling Shao.

0. Preface

  • Welcome to join the COD community! We have created a WeChat group chat; you can join it by adding the contact (WeChat ID: CVer222). Please include your affiliation.

  • This repository includes a detailed introduction, a strong baseline (Search & Identification Net, SINet), and one-key evaluation code for Camouflaged Object Detection (COD).

  • For more information about Camouflaged Object Detection, please visit our Project Page and read the Manuscript (PDF) / Chinese Version (PDF).

  • If you have any questions about our paper, feel free to contact Deng-Ping Fan or Ge-Peng Ji via e-mail. If you use SINet or the evaluation toolbox in your research, please cite this paper.

0.1. 🔥 NEWS 🔥

  • [2021/07/07] 💥 The latest enhanced version of SINet has arrived, accepted at IEEE TPAMI 2021 (Paper | GitHub). SINet-V2 surpasses existing COD methods by a large margin while maintaining real-time inference.
  • [2020/11/21] Updated the evaluation tool: Bi_cam(cam>threshold)=1 -> Bi_cam(cam>=threshold)=1 (see the illustration after this list).
  • [2020/10/22] 💥 Training code is available via email ([email protected]). Please provide your name & institution. Note that the code may be used for research purposes only.
  • [2020/10/22] Fixed the index range in Eq. (4): j = k+1, ..., M -> j = m, ..., k-1 (note that m denotes a specific layer; in our paper it equals 1).
  • [2020/09/09] SINet is the top-ranked method on the open benchmark website (https://paperswithcode.com/task/camouflaged-object-segmentation).
  • [2020/08/27] Updated the description in Table 3 (baseline models are trained using training setting (iii) rather than (iv)).
  • [2020/08/05] The online demo has been released! (http://mc.nankai.edu.cn/cod)
  • [2020/06/11] We re-organized the training set (listed in the 2.2. Usage section); please download it again.
  • [2020/05/05] 💥 Released the testing code.
  • [2020/04/25] Training/testing code will be updated soon ...
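For clarity, the 2020/11/21 fix changes only the comparison operator used when binarizing a prediction map. The toolbox itself is MATLAB, so the NumPy equivalent below is purely illustrative; the map and threshold values are made up:

import numpy as np

cam = np.random.rand(256, 256)  # a predicted camouflage map scaled to [0, 1]
threshold = 0.5                 # illustrative threshold

# Fixed behavior: pixels exactly at the threshold now count as foreground.
Bi_cam = (cam >= threshold).astype(np.uint8)  # was: (cam > threshold)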

0.2. Table of Contents

  • 0. Preface
  • 1. Task Relationship
  • 2. Proposed Baseline
  • 3. Results
  • 4. Proposed COD10K Datasets
  • 5. Evaluation Toolbox
  • 6. Potential Applications
  • 7. User Study Test
  • 8. Citation
  • 9. LICENSE
  • 10. Acknowledgements
  • 11. TODO LIST
  • 12. FAQ

0.3. File Structure

SINet
├── EvaluationTool
│   ├── CalMAE.m
│   ├── Enhancedmeasure.m
│   ├── Fmeasure_calu.m
│   ├── main.m
│   ├── original_WFb.m
│   ├── S_object.m
│   ├── S_region.m
│   └── StructureMeasure.m
├── Images
│   ├── CamouflagedTask.png
│   ├── CamouflagingFromMultiView.png
│   ├── CmpResults.png
│   ├── COD10K-2.png
│   ├── COD10K-3.png
│   ├── COVID'19-Infection.png
│   ├── locust detection.png
│   ├── new_score_1.png
│   ├── PolypSegmentation.png
│   ├── QuantitativeResults-new.png
│   ├── SampleAquaticAnimals.png
│   ├── Search-and-Rescue.png
│   ├── SINet.png
│   ├── SubClassResults-1.png
│   ├── SubClassResults.png
│   ├── Surface defect Detection2.png
│   ├── TaskRelationship.png
│   ├── Telescope.png
│   └── UnderwaterEnhancment.png
├── MyTest.py
├── README.md
├── requirement.txt
└── Src
    ├── backbone
    ├── __init__.py
    ├── SearchAttention.py
    ├── SINet.py
    └── utils

1. Task Relationship


Figure 1: Task relationship. Given an input image (a), we present the ground-truth for (b) panoptic segmentation (which detects generic objects including stuff and things), (c) salient object detection (which detects isolated objects that grasp human attention), and (d) the proposed concealed object detection task, where the goal is to detect objects that have a similar pattern to the natural habitat. In this example, the boundaries of the two butterflies are blended with the bananas, making them difficult to identify.


Figure 2: Given an input image (a), we present the ground-truth for (b) panoptic segmentation (which detects generic objects including stuff and things), (c) salient instance/object detection (which detects objects that grasp human attention), and (d) the proposed camouflaged object detection task, where the goal is to detect objects that have a similar pattern (e.g., edge, texture, or color) to the natural habitat. In this case, the boundaries of the two butterflies are blended with the bananas, making them difficult to identify. This task is far more challenging than the traditional salient object detection or generic object detection.

References for Salient Object Detection (SOD) benchmark works:
[1] Video SOD: Shifting More Attention to Video Salient Object Detection. CVPR, 2019. (Project Page)
[2] RGB SOD: Salient Objects in Clutter: Bringing Salient Object Detection to the Foreground. ECCV, 2018. (Project Page)
[3] RGB-D SOD: Rethinking RGB-D Salient Object Detection: Models, Datasets, and Large-Scale Benchmarks. TNNLS, 2020. (Project Page)
[4] Co-SOD: Taking a Deeper Look at the Co-salient Object Detection. CVPR, 2020. (Project Page)

2. Proposed Baseline

2.1. Overview


Figure 3: Overview of our SINet framework, which consists of two main components: the receptive field (RF) and partial decoder component (PDC). The RF is introduced to mimic the structure of RFs in the human visual system. The PDC reproduces the search and identification stages of animal predation. SA = search attention function described in [71]. See § 4 for details.
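For intuition, the search attention step can be thought of as: blur a coarse prediction map with a fixed Gaussian kernel, re-normalize it, and use it to gate the features passed to the identification stage. The PyTorch sketch below illustrates that idea only; Src/SearchAttention.py is the authoritative implementation, and the kernel size and sigma here are illustrative assumptions:

import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel2d(size: int = 31, sigma: float = 4.0) -> torch.Tensor:
    # Normalized 2-D Gaussian kernel of shape (size, size).
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2.0
    g = torch.exp(-(coords ** 2) / (2.0 * sigma ** 2))
    kernel = torch.outer(g, g)
    return kernel / kernel.sum()

class SearchAttentionSketch(nn.Module):
    def __init__(self, size: int = 31, sigma: float = 4.0):
        super().__init__()
        # Fixed (non-learnable) blur kernel, shape (1, 1, size, size).
        self.register_buffer("kernel", gaussian_kernel2d(size, sigma)[None, None])
        self.pad = size // 2

    def forward(self, attention: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # attention: (B, 1, H, W) coarse map from the search stage
        # x:         (B, C, H, W) features to be gated
        soft = F.conv2d(attention, self.kernel, padding=self.pad)
        # Min-max normalize each blurred map to [0, 1].
        lo = soft.amin(dim=(2, 3), keepdim=True)
        hi = soft.amax(dim=(2, 3), keepdim=True)
        soft = (soft - lo) / (hi - lo + 1e-8)
        # Keep the stronger of the blurred and original responses, then gate.
        return x * torch.max(soft, attention)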

2.2. Usage

The training and testing experiments were conducted using PyTorch on a single NVIDIA TITAN RTX GPU with 24 GB of memory.

Note that our model also supports low-memory GPUs: you can lower the batch size (~419 MB per image in apex mode O1, and ~305 MB per image in apex mode O2).
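For reference, this is how NVIDIA Apex mixed precision is typically wired into a training step. Since the training code is distributed by email (see the NEWS above), everything below is a minimal, self-contained sketch: the dummy network, loss, and 352x352 input resolution are placeholders, not the repository's actual training script.

import torch
import torch.nn as nn
from apex import amp  # NVIDIA Apex; see the optional install step below

# Placeholder network standing in for SINet (illustrative only).
model = nn.Conv2d(3, 1, kernel_size=3, padding=1).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

# opt_level "O1" (~419 MB/image) or "O2" (~305 MB/image), per the note above.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

images = torch.randn(4, 3, 352, 352).cuda()  # dummy batch
gts = torch.rand(4, 1, 352, 352).cuda()      # dummy ground-truth masks

preds = model(images)
loss = criterion(preds, gts)
optimizer.zero_grad()
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()                   # scaled for FP16 numerical stability
optimizer.step()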

  1. Configuring your environment (Prerequisites):

    Note that SINet has only been tested on Ubuntu with the following environment. It may work on other operating systems as well, but we make no guarantees.

    • Creating a virtual environment in terminal: conda create -n SINet python=3.6.

    • Installing necessary packages: pip install -r requirements.txt.

    • (Optional; only for training) Installing NVIDIA Apex to accelerate the training process with mixed precision (Instructions; tested under CUDA 10.0 and cuDNN 7.4).

  2. Downloading Training and Testing Sets:

    • Download the NEW testing dataset (COD10K-test + CAMO-test + CHAMELEON) and move it into ./Dataset/TestDataset/; it can be found at this Google Drive link or Baidu Pan link (fetch code: z83z).

    • Download the NEW training dataset (COD10K-train), which can be found at this Google Drive link or Baidu Pan link (fetch code: djq2). Please refer to our original paper for other training data.

  3. Testing Configuration:

    • After downloading the pre-trained model and testing data, just run MyTest.py to generate the final prediction maps: point --model_path at your trained model directory and --test_save at the directory where the inferred masks should be saved (see the example command after this list).

    • Note that we re-trained our model (marked as $\diamondsuit$ in the following figure) with the mixed-precision training strategy of the Apex lib (mode O1) and obtained better performance within 40 epochs. We provide the new pre-trained model here (Baidu Drive [fetch code: 2pp2] / Google Drive). Later, we will try SINet with different backbones to improve performance and provide a more comprehensive comparison.


  4. Evaluating your trained model:

    • One-key evaluation is written in MATLAB (revised from this link); follow the instructions in main.m and just run it to generate the evaluation results in ./EvaluationTool/EvaluationResults/Result-CamObjDet/.
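As referenced in step 3, a typical test invocation looks like the following. The checkpoint and output paths are hypothetical placeholders; only the two flags themselves (--model_path, --test_save) come from this README:

python MyTest.py --model_path ./Snapshot/SINet_40.pth --test_save ./Results/SINet/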

3. Results

3.1. Qualitative Comparison


Figure 4: Qualitative results of our SINet and two top-performing baselines on COD10K. Refer to our paper for details.

3.2. Quantitative Comparison (Overall/Sub-class)


Table 1: Quantitative results on different datasets. The best scores are highlighted in bold.


Table 2: Quantitative results of Structure-measure (Sฮฑ) for each sub-class in our COD10K dataset-(1/2). The best score of each category is highlighted in bold.


Table 3: Quantitative results of Structure-measure (Sฮฑ) for each sub-class in our COD10K dataset-(2/2). The best score of each category is highlighted in bold.

3.3. Results Download

  1. Results of our SINet can be found in this download link.

  2. Performance of competing methods can be found in this download link.

4. Proposed COD10K Datasets


Figure 5: The extraction of individual samples including 20 sub-classes from our COD10K (2/5)โ€“Aquatic animals.


Figure 6: Annotation diversity and meticulousness in the proposed COD10K dataset. Instead of only providing coarse-grained object-level annotations with the three major types of bias found in prior works (e.g., embedded watermarks, coarse annotation, and occlusion), we offer six different annotations, which include edge-level (4th row), object-level (5th row), instance-level (6th row), bounding boxes (7th row), and attributes (8th row). Refer to the manuscript for more attribute details.


Figure 7: Regularized quality control during our labeling re-verification stage. We strictly adhere to the four major criteria for rejection or acceptance to approach the ceiling of annotation accuracy.

COD10K dataset: Baidu Pan (fetch code: aq4i) | Google Drive

5. Evaluation Toolbox

We provide a complete and fair one-key evaluation toolbox for benchmarking within a uniform standard. Please refer to the following links for more information:

  • MATLAB version: https://github.com/DengPingFan/CODToolbox
  • Python version: https://github.com/lartpang/PySODMetrics
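For a quick sanity check without MATLAB, the MAE score (the quantity EvaluationTool/CalMAE.m computes) can be reproduced in a few lines of Python. This is a minimal sketch assuming the prediction and ground truth are grayscale image files; it is not a substitute for either toolbox:

import numpy as np
from PIL import Image

def mae(pred_path: str, gt_path: str) -> float:
    # Mean absolute error between a prediction map and its ground truth,
    # both scaled to [0, 1]; the prediction is resized to the GT resolution.
    gt_img = Image.open(gt_path).convert("L")
    pred_img = Image.open(pred_path).convert("L").resize(gt_img.size)
    gt = np.asarray(gt_img, dtype=np.float64) / 255.0
    pred = np.asarray(pred_img, dtype=np.float64) / 255.0
    return float(np.abs(pred - gt).mean())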

6. Potential Applications

  1. Medical (polyp segmentation and COVID-19 infection segmentation diagnosis). Please refer to this page (https://github.com/DengPingFan/Inf-Net) for more details.


Figure 8: Lung Infection Segmentation.



Figure 9: Example of COVID-19 infected regions in a CT axial slice, where the red and green regions denote GGO and consolidation, respectively. The images are collected from the COVID-19 CT segmentation dataset (https://medicalsegmentation.com/covid19/, accessed 2020-04-11).

  2. Agriculture (locust detection to prevent invasion)


Figure 10: Locust disaster detection.

  3. Art (e.g., for photorealistic blending, or recreational art)


Figure 11: The answer can be found here (Camouflaging an Object from Many Viewpoints, CVPR 2014).

  4. Computer Vision (e.g., for search-and-rescue work, or rare species discovery)


Figure 13: Search and Rescue for saving lives.

  5. Underwater Image Enhancement


Figure 14: Please refer to "An Underwater Image Enhancement Benchmark Dataset and Beyond, TIP2019" for more details.

  6. Surface Defect Detection


Figure 15: Please refer to "A review of recent advances in surface defect detection using texture analysis techniques, 2008" for more details.

7. User Study Test

--> Click here to explore more interesting things (YouTube Link) <--

8. Citation

Please cite our paper if you find the work useful:

@inproceedings{fan2020Camouflage,
  title={Camouflaged Object Detection},
  author={Fan, Deng-Ping and Ji, Ge-Peng and Sun, Guolei and Cheng, Ming-Ming and Shen, Jianbing and Shao, Ling},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020}
}

9. LICENSE

  • The COD10K Dataset is made available for non-commercial purposes only.

  • You will not, directly or indirectly, reproduce, use, or convey the COD10K Dataset or any Content, or any work product or data derived therefrom, for commercial purposes.

This code is for academic communication only and not for commercial purposes. If you want to use it commercially, please contact us.

Redistribution and use in source form, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions, and the following disclaimer.

  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions, and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

10. Acknowledgements

We would like to thank the authors of the CHAMELEON and CAMO datasets, who put tremendous effort into these datasets to boost this field. We also appreciate the image annotators, as well as Wenguan Wang, Geng Chen, and Hongsong Wang, for their insightful feedback and discussions.

11. TODO LIST

If you want to improve the usability of this project or have any advice, please feel free to contact me directly (E-mail).

  • Support NVIDIA APEX training.

  • Support different backbones (VGGNet, ResNet, ResNeXt, Res2Net, iResNet, ResNeSt, etc.).

  • Support distributed training.

  • Support lightweight architecture and real-time inference, like MobileNet, SqueezeNet.

  • Add more comprehensive competitors.

12. FAQ

  1. If images cannot be loaded on this page (mostly due to domestic network restrictions), see the solution below.

    Solution Link


