
ChangeFormer: A Transformer-Based Siamese Network for Change Detection

Wele Gedara Chaminda Bandara and Vishal M. Patel

📔 Accepted for publication at IGARSS-22, Kuala Lumpur, Malaysia.

Here, we provide the PyTorch implementation of the paper: A Transformer-Based Siamese Network for Change Detection.

📔 For more information, please see our paper on arXiv and the video on YouTube.

Updates

⚡ Our new Semi-supervised Change Detection paper is now online: SemiCD
⚡ ChangeFormer has been accepted for publication at IGARSS-22, Kuala Lumpur, Malaysia.

💬 Network Architecture


💬 Quantitative & Qualitative Results on LEVIR-CD and DSIFN-CD


💬 Requirements

Python 3.8.0
pytorch 1.10.1
torchvision 0.11.2
einops  0.3.2

Please see requirements.txt for all the other requirements. You can create a conda environment named ChangeFormer with the following commands:

conda create --name ChangeFormer --file requirements.txt
conda activate ChangeFormer

💬 Installation

Clone this repo:

git clone https://github.com/wgcban/ChangeFormer.git
cd ChangeFormer

💬 Quick Start on the LEVIR-CD dataset

We have some samples from the LEVIR-CD dataset in the folder samples_LEVIR for a quick start.

First, download our pretrained ChangeFormerV6 model from Dropbox or GitHub. After downloading the pretrained model, put it in checkpoints/ChangeFormer_LEVIR/.

Then, run a demo to get started as follows:

python demo_LEVIR.py

After that, you can find the prediction results in samples/predict_LEVIR.
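
If you want to inspect the predicted change maps programmatically, here is a minimal Python sketch. It assumes the demo writes PNG masks into samples/predict_LEVIR/ (adjust the path and extension if your output differs):

# Minimal sketch: display a few predicted change maps from the assumed output folder.
import glob

import matplotlib.pyplot as plt
from PIL import Image

for path in sorted(glob.glob("samples/predict_LEVIR/*.png"))[:3]:
    mask = Image.open(path)                # predicted binary change map
    plt.figure()
    plt.title(path)
    plt.imshow(mask, cmap="gray")          # white pixels = detected change
plt.show()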

💬 Quick Start on the DSIFN-CD dataset

We have some samples from the DSIFN-CD dataset in the folder samples_DSIFN for a quick start.

First, download our pretrained ChangeFormerV6 model from Dropbox or GitHub. After downloading the pretrained model, put it in checkpoints/ChangeFormer_DSIFN/.

Then, run a demo to get started as follows:

python demo_DSIFN.py

After that, you can find the prediction results in samples/predict_DSIFN.

💬 Train on LEVIR-CD

When we initially train our ChangeFormer, we initialize some of the network parameters with a model pre-trained for RGB semantic segmentation (ADE20K, 160k schedule) to get faster convergence.

You can download the pre-trained model here.

wget https://www.dropbox.com/s/undtrlxiz7bkag5/pretrained_changeformer.pt

Then, point the pretrain argument in run_ChangeFormer_LEVIR.sh to the downloaded model:

pretrain=pretrained_changeformer/pretrained_changeformer.pt
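
For intuition only, this kind of partial initialization is typically done by copying the tensors whose names and shapes match the current model and leaving the remaining layers (e.g., the change-detection head) randomly initialized. The sketch below illustrates the general mechanism in plain PyTorch; it is an assumption about how such loading works, not the repository's exact loading code:

# Illustrative partial weight initialization from a pretrained checkpoint (not the repo's exact code).
import torch

def init_from_pretrained(model: torch.nn.Module, ckpt_path: str) -> None:
    checkpoint = torch.load(ckpt_path, map_location="cpu")
    state_dict = checkpoint.get("state_dict", checkpoint)   # unwrap if the checkpoint is wrapped
    model_state = model.state_dict()
    # Copy only tensors whose name and shape match the current model.
    matched = {k: v for k, v in state_dict.items()
               if k in model_state and v.shape == model_state[k].shape}
    model_state.update(matched)
    model.load_state_dict(model_state)   # unmatched layers keep their random initialization
    print(f"Initialized {len(matched)}/{len(model_state)} tensors from {ckpt_path}")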

You can find the training script run_ChangeFormer_LEVIR.sh in the scripts folder. You can run it with sh scripts/run_ChangeFormer_LEVIR.sh from the command line.

The detailed script file run_ChangeFormer_LEVIR.sh is as follows:

#!/usr/bin/env bash

#GPUs
gpus=0

#Set paths
checkpoint_root=/media/lidan/ssd2/ChangeFormer/checkpoints
vis_root=/media/lidan/ssd2/ChangeFormer/vis
data_name=LEVIR


img_size=256    
batch_size=16   
lr=0.0001         
max_epochs=200
embed_dim=256

net_G=ChangeFormerV6        #ChangeFormerV6 is the finalized version

lr_policy=linear
optimizer=adamw                 #Choices: sgd (set lr to 0.01), adam, adamw
loss=ce                         #Choices: ce, fl (Focal Loss), miou
multi_scale_train=True
multi_scale_infer=False
shuffle_AB=False

#Initializing from pretrained weights
pretrain=/media/lidan/ssd2/ChangeFormer/pretrained_segformer/segformer.b2.512x512.ade.160k.pth

#Train and Validation splits
split=train         #train
split_val=test      #test, val
project_name=CD_${net_G}_${data_name}_b${batch_size}_lr${lr}_${optimizer}_${split}_${split_val}_${max_epochs}_${lr_policy}_${loss}_multi_train_${multi_scale_train}_multi_infer_${multi_scale_infer}_shuffle_AB_${shuffle_AB}_embed_dim_${embed_dim}

CUDA_VISIBLE_DEVICES=1 python main_cd.py --img_size ${img_size} --loss ${loss} --checkpoint_root ${checkpoint_root} --vis_root ${vis_root} --lr_policy ${lr_policy} --optimizer ${optimizer} --pretrain ${pretrain} --split ${split} --split_val ${split_val} --net_G ${net_G} --multi_scale_train ${multi_scale_train} --multi_scale_infer ${multi_scale_infer} --gpu_ids ${gpus} --max_epochs ${max_epochs} --project_name ${project_name} --batch_size ${batch_size} --shuffle_AB ${shuffle_AB} --data_name ${data_name}  --lr ${lr} --embed_dim ${embed_dim}
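
As a side note, lr_policy=linear usually means the learning rate decays linearly from its initial value towards zero over max_epochs. A generic PyTorch sketch of such a schedule is shown below for illustration; the repository may implement the policy slightly differently:

# Generic linear learning-rate decay over max_epochs (illustration only).
import torch

def make_linear_scheduler(optimizer: torch.optim.Optimizer, max_epochs: int):
    # Scale the base lr by (1 - epoch / max_epochs) after each epoch.
    return torch.optim.lr_scheduler.LambdaLR(
        optimizer, lr_lambda=lambda epoch: 1.0 - epoch / float(max_epochs))

# Example with AdamW and lr=0.0001, matching the script's settings.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=1e-4)
scheduler = make_linear_scheduler(optimizer, max_epochs=200)
for epoch in range(200):
    # ... one epoch of training ...
    scheduler.step()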

💬 Train on DSIFN-CD

Follow a procedure similar to the one described for LEVIR-CD. Use run_ChangeFormer_DSIFN.sh in the scripts folder to train on DSIFN-CD.

💬 Evaluate on LEVIR

You can find the evaluation script eval_ChangeFormer_LEVIR.sh in the scripts folder. You can run it with sh scripts/eval_ChangeFormer_LEVIR.sh from the command line.

The detailed script file eval_ChangeFormer_LEVIR.sh is as follows:

#!/usr/bin/env bash

gpus=0

data_name=LEVIR
net_G=ChangeFormerV6 #This is the best version
split=test
vis_root=/media/lidan/ssd2/ChangeFormer/vis
project_name=CD_ChangeFormerV6_LEVIR_b16_lr0.0001_adamw_train_test_200_linear_ce_multi_train_True_multi_infer_False_shuffle_AB_False_embed_dim_256
checkpoints_root=/media/lidan/ssd2/ChangeFormer/checkpoints
checkpoint_name=best_ckpt.pt
img_size=256
embed_dim=256 #Make sure to change the embedding dim (best and default = 256)

CUDA_VISIBLE_DEVICES=0 python eval_cd.py --split ${split} --net_G ${net_G} --embed_dim ${embed_dim} --img_size ${img_size} --vis_root ${vis_root} --checkpoints_root ${checkpoints_root} --checkpoint_name ${checkpoint_name} --gpu_ids ${gpus} --project_name ${project_name} --data_name ${data_name}
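
Results on these benchmarks are typically reported with precision, recall, F1, IoU, and overall accuracy of the change class. The generic sketch below shows how such metrics can be computed from binary prediction and ground-truth masks; it is an illustration, not the repository's evaluation code:

# Generic change-detection metrics from binary masks (illustration only).
import numpy as np

def change_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """pred, gt: arrays of 0/1 with the same shape (1 = change)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    precision = tp / (tp + fp + 1e-10)
    recall = tp / (tp + fn + 1e-10)
    f1 = 2 * precision * recall / (precision + recall + 1e-10)
    iou = tp / (tp + fp + fn + 1e-10)
    oa = (tp + tn) / (tp + tn + fp + fn)
    return {"precision": precision, "recall": recall, "f1": f1, "iou": iou, "oa": oa}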

💬 Evaluate on DSIFN

Follow the same evaluation procedure described for LEVIR-CD. You can find the evaluation script eval_ChangeFormer_DSIFN.sh in the scripts folder and run it with sh scripts/eval_ChangeFormer_DSIFN.sh from the command line.

💬 Dataset Preparation

👉 Data structure

"""
Change detection data set with pixel-level binary labels๏ผ›
โ”œโ”€A
โ”œโ”€B
โ”œโ”€label
โ””โ”€list
"""

A: images of t1 phase;

B: images of t2 phase;

label: label maps;

list: contains train.txt, val.txt and test.txt; each file records the image names (XXX.png) of the corresponding split of the change detection dataset.
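
As a quick illustration of this layout, the sketch below reads one (A, B, label) triple for a given split. It assumes exactly the folder and list-file names above and is not the repository's data loader:

# Minimal illustration of reading one (A, B, label) triple from the layout above.
import os

from PIL import Image

def load_pair(root: str, split: str, index: int = 0):
    """root: dataset folder containing A/, B/, label/ and list/; split: train, val or test."""
    with open(os.path.join(root, "list", f"{split}.txt")) as f:
        names = [line.strip() for line in f if line.strip()]
    name = names[index]
    img_a = Image.open(os.path.join(root, "A", name))       # t1 image
    img_b = Image.open(os.path.join(root, "B", name))       # t2 image
    label = Image.open(os.path.join(root, "label", name))   # binary change map
    return img_a, img_b, label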

👉 Links to download processed datasets used for train/val/test

You can download the processed LEVIR-CD and DSIFN-CD datasets from Dropbox using the links below:

Since the files are large, I recommend using the command line and downloading the zip files as follows (on Linux):

To download the LEVIR-CD dataset, run the following command in a Linux terminal:

wget https://www.dropbox.com/s/h9jl2ygznsaeg5d/LEVIR-CD-256.zip

To download the DSIFN-CD dataset, run the following command in a Linux terminal:

wget https://www.dropbox.com/sh/i54h8kkpgar1s07/AAA0rBAFl9UZ3U3Z1_o46UT0a/DSIFN-CD-256.zip

For your reference, I have also attached the links to the original datasets here: LEVIR-CD and DSIFN-CD.

💬 License

Code is released for non-commercial and research purposes only. For commercial purposes, please contact the authors.

💬 Citation

If you use this code for your research, please cite our paper:

@misc{bandara2022transformerbased,
      title={A Transformer-Based Siamese Network for Change Detection}, 
      author={Wele Gedara Chaminda Bandara and Vishal M. Patel},
      year={2022},
      eprint={2201.01293},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

💬 References

We appreciate the work from the following repositories:
