R2CNN_HEAD: Position Detection and Direction Prediction for Arbitrary-Oriented Ships via Multitask Rotation Region Convolutional Neural Network
Recommended improved code (a TensorFlow implementation of the FPN and R2CNN detection frameworks, based on FPN): https://github.com/DetectionTeamUCAS
You can refer to the papers R2CNN: Rotational Region CNN for Orientation Robust Scene Text Detection and Feature Pyramid Networks for Object Detection.
For other rotation detection methods, refer to R-DFPN, RRPN, and R2CNN.
If this project is useful to you, please star it to support my work. Thanks.
Citation
Some relevant achievements based on this code.
@article{yang2018position,
  title={Position Detection and Direction Prediction for Arbitrary-Oriented Ships via Multitask Rotation Region Convolutional Neural Network},
  author={Yang, Xue and Sun, Hao and Sun, Xian and Yan, Menglong and Guo, Zhi and Fu, Kun},
  journal={IEEE Access},
  volume={6},
  pages={50839--50849},
  year={2018},
  publisher={IEEE},
  url={https://ieeexplore.ieee.org/document/8464244}
}
@article{yang2018r-dfpn,
  title={Automatic ship detection in remote sensing images from Google Earth of complex scenes based on multiscale rotation dense feature pyramid networks},
  author={Yang, Xue and Sun, Hao and Fu, Kun and Yang, Jirui and Sun, Xian and Yan, Menglong and Guo, Zhi},
  journal={Remote Sensing},
  volume={10},
  number={1},
  pages={132},
  year={2018},
  publisher={Multidisciplinary Digital Publishing Institute},
  url={http://www.mdpi.com/2072-4292/10/1/132}
}
Configuration Environment
Ubuntu (encoding problems may occur on Windows) + Python 2 + TensorFlow 1.2 + OpenCV (cv2) + CUDA 8.0 + GeForce GTX 1080
If you want to run on CPU, set use_gpu = False for the NMS and IoU functions in cfgs.py, as sketched below.
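A minimal sketch of that switch (only the flag name use_gpu comes from the note above; everything else in cfgs.py is omitted here):

# libs/configs/cfgs.py (illustrative excerpt)
# Fall back to the CPU implementations of NMS and IoU instead of the CUDA kernels.
use_gpu = False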
You can also use the docker environment: docker pull yangxue2docker/tensorflow3_gpu_cv2_sshd:v1.0
Installation
Clone the repository
git clone https://github.com/yangxue0827/R2CNN_HEAD_FPN_Tensorflow.git
Make tfrecord
The data is in VOC format; see here for reference.
Data path format (see $R2CNN_HEAD_ROOT/data/io/divide_data.py; a sketch of such a split follows the tree below):
├── VOCdevkit
│   ├── VOCdevkit_train
│   │   ├── Annotation
│   │   └── JPEGImages
│   └── VOCdevkit_test
│       ├── Annotation
│       └── JPEGImages
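divide_data.py produces the train/test folders shown above; the sketch below shows what such a split typically looks like. The 80/20 ratio and the function name split_voc are assumptions, not the repo's script.

# Illustrative train/test split over a VOC-style folder; not the repo's script.
import os, random, shutil

def split_voc(voc_dir, out_dir, train_ratio=0.8, seed=0):
    imgs = sorted(os.listdir(os.path.join(voc_dir, 'JPEGImages')))
    random.Random(seed).shuffle(imgs)
    n_train = int(len(imgs) * train_ratio)
    subsets = [('VOCdevkit_train', imgs[:n_train]), ('VOCdevkit_test', imgs[n_train:])]
    for subset, names in subsets:
        for sub in ('Annotation', 'JPEGImages'):
            os.makedirs(os.path.join(out_dir, subset, sub))
        for name in names:
            stem = os.path.splitext(name)[0]
            # copy each image together with its annotation file
            shutil.copy(os.path.join(voc_dir, 'JPEGImages', name),
                        os.path.join(out_dir, subset, 'JPEGImages', name))
            shutil.copy(os.path.join(voc_dir, 'Annotation', stem + '.xml'),
                        os.path.join(out_dir, subset, 'Annotation', stem + '.xml'))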
Convert the data to tfrecord
cd $R2CNN_HEAD_ROOT/data/io/
python convert_data_to_tfrecord.py --VOC_dir='***/VOCdevkit/VOCdevkit_train/' --save_name='train' --img_format='.jpg' --dataset='ship'
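For orientation, the conversion boils down to packing each image and its XML annotation into a tf.train.Example and serializing it with a TFRecordWriter. The sketch below shows that idea only; the actual feature names and rotated-box encoding in convert_data_to_tfrecord.py differ (TensorFlow 1.x API assumed).

# Illustrative only: shows the VOC-XML -> tf.train.Example idea, not the repo's exact fields.
import os
import xml.etree.ElementTree as ET
import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def voc_to_example(img_path, xml_path):
    with open(img_path, 'rb') as f:
        encoded_img = f.read()                      # raw JPEG bytes
    root = ET.parse(xml_path).getroot()
    names, boxes = [], []
    for obj in root.findall('object'):
        names.append(obj.find('name').text.encode())
        bb = obj.find('bndbox')
        boxes.extend(int(bb.find(k).text) for k in ('xmin', 'ymin', 'xmax', 'ymax'))
    return tf.train.Example(features=tf.train.Features(feature={
        'img': _bytes_feature(encoded_img),
        'img_name': _bytes_feature(os.path.basename(img_path).encode()),
        'names': tf.train.Feature(bytes_list=tf.train.BytesList(value=names)),
        'boxes': tf.train.Feature(int64_list=tf.train.Int64List(value=boxes)),
    }))

writer = tf.python_io.TFRecordWriter('train.tfrecord')   # TF 1.x API
# loop over the JPEGImages/Annotation pairs and call:
#     writer.write(voc_to_example(img_path, xml_path).SerializeToString())
writer.close()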
Compile
cd $R2CNN_HEAD_ROOT/libs/box_utils/
python setup.py build_ext --inplace
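This step compiles the Cython helpers (e.g. rotated IoU and NMS) in place. A minimal sketch of what such a setup.py typically contains; the module and file names here are placeholders, not the repo's:

# Illustrative Cython build script; module/file names are placeholders.
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
import numpy as np

ext = Extension(
    'cython_utils',                      # placeholder module name
    sources=['cython_utils.pyx'],        # e.g. rotated IoU / NMS helpers
    include_dirs=[np.get_include()],     # the helpers operate on NumPy arrays
)
setup(ext_modules=cythonize([ext]))      # `build_ext --inplace` drops the .so next to the sources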
Demo
1. Unzip the weights: $R2CNN_HEAD_ROOT/output/res101_trained_weights/*.rar
2. Put images in $R2CNN_HEAD_ROOT/tools/inference_image
3. Configure parameters in $R2CNN_HEAD_ROOT/libs/configs/cfgs.py and modify the project's root directory
4. cd $R2CNN_HEAD_ROOT/tools
5. Image slices:
python inference.py
6. Big image (processed as overlapping tiles; see the sketch after these steps):
cd $R2CNN_HEAD_ROOT/tools
python demo.py --src_folder=./demo_src --des_folder=./demo_des
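demo.py handles whole scenes by cutting them into overlapping tiles, running detection per tile, and mapping the detections back to full-image coordinates. The sketch below illustrates that pattern only; detect_tile, the tile size, and the overlap are assumptions, not the repo's API.

# Illustrative tile-based inference over a large image; detect_tile is a placeholder
# for whatever per-tile detector you call (not the repo's API).
import numpy as np

def slice_and_detect(img, detect_tile, tile=800, overlap=200):
    h, w = img.shape[:2]
    stride = tile - overlap
    all_boxes = []
    for y in range(0, max(h - overlap, 1), stride):
        for x in range(0, max(w - overlap, 1), stride):
            patch = img[y:y + tile, x:x + tile]
            for box in detect_tile(patch):           # box: [x1, y1, x2, y2, score, ...]
                box = np.asarray(box, dtype=np.float32)
                box[[0, 2]] += x                     # shift back to full-image coordinates
                box[[1, 3]] += y
                all_boxes.append(box)
    return all_boxes                                 # a global NMS usually follows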
Train
1. Modify $R2CNN_HEAD_ROOT/libs/lable_name_dict/***_dict.py so that the categories correspond to the number of classes in the configuration file (see the sketch after these steps).
2. Download a pretrained weight (resnet_v1_101_2016_08_28.tar.gz or resnet_v1_50_2016_08_28.tar.gz) from here, then extract it into $R2CNN_HEAD_ROOT/data/pretrained_weights.
3.
cd $R2CNN_HEAD_ROOT/tools
python train.py
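The point of step 1 is that the category names in the label dict must agree with the class count the network is configured for, background included. A minimal sketch, with illustrative variable names that may not match the repo exactly:

# Illustrative only: variable names may differ from the repo's ***_dict.py and cfgs.py.
NAME_LABEL_MAP = {
    'back_ground': 0,   # background is label 0
    'ship': 1,          # one foreground class for the ship dataset
}
# In the configuration file, the foreground class count should then be
# len(NAME_LABEL_MAP) - 1, i.e. 1 here.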
Test tfrecord
cd $R2CNN_HEAD_ROOT/tools
python test.py
eval (not recommended; please refer here)
cd $R2CNN_HEAD_ROOT/tools
python eval.py
Summary
tensorboard --logdir=$R2CNN_HEAD_ROOT/output/res101_summary/