Hyperspectral Image Classification
This repository implements six frameworks for hyperspectral image classification based on PyTorch and sklearn.
The detailed results can be found in the paper Classification of Hyperspectral Image Based on Double-Branch Dual-Attention Mechanism Network.
Some of our code references the following projects:
- Dual-Attention-Network
- Remote sensing image classification
- A Fast Dense Spectral-Spatial Convolution Network Framework for Hyperspectral Images Classification
If our code is helpful to you, please cite:
Li R, Zheng S, Duan C, et al. Classification of Hyperspectral Image Based on Double-Branch Dual-Attention Mechanism Network[J]. Remote Sensing, 2020, 12(3): 582.
Requirements:
- numpy >= 1.16.5
- PyTorch >= 1.3.1
- sklearn >= 0.20.4
Datasets:
You can download the hyperspectral datasets in .mat format at http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes and move the files to the ./datasets folder.
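Once downloaded, a scene and its ground truth can be read with scipy. This is a minimal sketch; `load_scene` and the key names are illustrative, since the variable keys inside each .mat file differ per scene (they are not defined by this repository).

```python
# Hedged sketch: load a downloaded .mat scene and its ground-truth map.
# The function name and the data/label keys are assumptions for illustration;
# inspect each file with sio.whosmat() to find the actual keys.
import numpy as np
import scipy.io as sio

def load_scene(data_path, label_path, data_key, label_key):
    """Return the hyperspectral cube (H, W, bands) and label map (H, W)."""
    data = sio.loadmat(data_path)[data_key]
    labels = sio.loadmat(label_path)[label_key]
    return np.asarray(data), np.asarray(labels)
```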
Usage:
- Set the percentage of training and validation samples with the `load_dataset` function in `./global_module/generate_pic.py`.
- Taking the DBDA framework as an example, run `./DBDA/main.py` and type the name of the dataset.
- The classification maps are saved in the `./DBDA/classification_maps` folder, and the accuracy results are generated in the `./DBDA/records` folder.
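The percentage-based splitting that `load_dataset` controls can be sketched as a stratified split over labeled pixels. This is an illustrative outline, not the repository's actual code; the function name `split_samples` and the default ratios are assumptions.

```python
# Hedged sketch of percentage-based train/validation/test splitting over the
# labeled pixels of a ground-truth map (label 0 = unlabeled, as in the
# standard scenes). Names and ratios are illustrative.
import numpy as np
from sklearn.model_selection import train_test_split

def split_samples(labels, train_ratio=0.03, val_ratio=0.03, seed=0):
    """Split labeled pixel indices into train/val/test by percentage."""
    idx = np.flatnonzero(labels.ravel() > 0)   # keep labeled pixels only
    y = labels.ravel()[idx]
    # First carve out the training set, stratified by class.
    train_idx, rest_idx, _, y_rest = train_test_split(
        idx, y, train_size=train_ratio, stratify=y, random_state=seed)
    # Then take the validation set from the remainder; rescale the ratio.
    val_size = val_ratio / (1.0 - train_ratio)
    val_idx, test_idx = train_test_split(
        rest_idx, train_size=val_size, stratify=y_rest, random_state=seed)
    return train_idx, val_idx, test_idx
```

Stratifying keeps the per-class proportions stable, which matters at the very small sampling rates (0.5-3%) used in the experiments below.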
Network:
Figure 1. The structure of the DBDA network. The upper Spectral Branch, composed of a dense spectral block and a channel attention block, is designed to capture spectral features. The lower Spatial Branch, composed of a dense spatial block and a spatial attention block, is designed to exploit spatial features.
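The two-branch idea in Figure 1 can be sketched in PyTorch as follows. This is a simplified illustration of the pattern (spectral branch with channel attention, spatial branch with spatial attention, fused before classification), not the paper's actual DBDA layers; all layer widths and kernel sizes here are assumptions.

```python
# Hedged sketch of a double-branch, dual-attention classifier for a
# hyperspectral patch. Layer sizes are illustrative, not the paper's design.
import torch
import torch.nn as nn

class TwoBranchNet(nn.Module):
    def __init__(self, bands, n_classes, width=24):
        super().__init__()
        # Spectral branch: 1x1 convs mix bands without spatial context.
        self.spectral = nn.Sequential(
            nn.Conv2d(bands, width, 1), nn.BatchNorm2d(width), nn.ReLU())
        # Channel attention: squeeze to 1x1, produce per-channel weights.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(width, width, 1), nn.Sigmoid())
        # Spatial branch: 3x3 convs capture neighborhood structure.
        self.spatial = nn.Sequential(
            nn.Conv2d(bands, width, 3, padding=1),
            nn.BatchNorm2d(width), nn.ReLU())
        # Spatial attention: one weight per location.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(width, 1, 7, padding=3), nn.Sigmoid())
        self.head = nn.Linear(2 * width, n_classes)

    def forward(self, x):                      # x: (N, bands, H, W) patch
        fs = self.spectral(x)
        fs = fs * self.channel_att(fs)         # reweight channels
        fp = self.spatial(x)
        fp = fp * self.spatial_att(fp)         # reweight locations
        f = torch.cat([fs.mean((2, 3)), fp.mean((2, 3))], dim=1)
        return self.head(f)                    # (N, n_classes) logits
```

In the paper's DBDA, the plain conv stacks above are replaced by 3D dense blocks; the fusion-after-attention structure is the part this sketch reproduces.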
Results:
Figure 2. Classification maps for the IP dataset using 3% training samples. (a) False-color image. (b) Ground-truth (GT). (c)–(h) The classification maps with disparate algorithms.
Figure 3. Classification maps for the UP dataset using 0.5% training samples. (a) False-color image. (b) Ground-truth (GT). (c)–(h) The classification maps with disparate algorithms.
Figure 4. Classification maps for the UP dataset using 0.5% training samples. (a) False-color image. (b) Ground-truth (GT). (c)–(h) The classification maps with disparate algorithms.
Figure 5. Classification maps for the BS dataset using 1.2% training samples. (a) False-color image. (b) Ground-truth (GT). (c)–(h) The classification maps with disparate algorithms.