

R-YOLOv4

Introduction

The objective of this project is to adapt the YOLOv4 model to detect oriented objects. As a result, the original loss function of the model has to be modified. I obtained good results by increasing the number of anchor boxes with different rotation angles and by combining the smooth-L1-IoU loss function proposed in R3Det: Refined Single-Stage Detector with Feature Refinement for Rotating Object with the original bounding-box loss.
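For intuition, here is a minimal sketch (not code from this repository) of what "increasing the number of anchor boxes with different rotation angles" means: every base YOLO anchor size is replicated at several angles, so each grid cell proposes rotated priors. The anchor sizes and the angle set below are illustrative values only.

    import math

    # Illustrative values only -- the real anchor sizes and angle set
    # come from the repository's config files, not from this snippet.
    base_anchors = [(142, 110), (192, 243), (459, 401)]   # (w, h) pairs for one detection scale
    angles = [-math.pi / 3, -math.pi / 6, 0.0,
              math.pi / 6, math.pi / 3, math.pi / 2]       # candidate rotation angles

    # Every (w, h) anchor is paired with every angle, so the head predicts
    # len(base_anchors) * len(angles) rotated priors per grid cell.
    rotated_anchors = [(w, h, a) for (w, h) in base_anchors for a in angles]
    print(len(rotated_anchors))  # 3 sizes x 6 angles = 18 rotated anchors at this scale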

Features


Loss Function (only for x, y, w, h, theta)

(figures: loss formulation and angle definition)

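As a rough sketch of the smooth-L1-IoU idea (an approximation of the R3Det modulated loss, not the exact code used in this repository): the smooth-L1 term over (x, y, w, h, theta) supplies the gradient direction, while its magnitude is rescaled by a skew-IoU factor so that the loss value tracks the true overlap of the rotated boxes. The skew_iou input is assumed to be computed by a separate routine.

    import torch
    import torch.nn.functional as F

    def smooth_l1_iou_loss(pred, target, skew_iou, eps=1e-9):
        """Sketch of a smooth-L1-IoU regression loss for rotated boxes.

        pred, target: (N, 5) tensors of (x, y, w, h, theta)
        skew_iou:     (N,) tensor with the IoU of each rotated box pair,
                      assumed to be computed elsewhere.
        """
        # Per-box smooth L1 over the five regression targets.
        l1 = F.smooth_l1_loss(pred, target, reduction="none").sum(dim=-1)
        # Rescale so the loss magnitude follows (1 - skew IoU) while the
        # gradient direction still comes from the smooth L1 term.
        modulation = (1.0 - skew_iou) / (l1.detach() + eps)
        return (l1 * modulation).mean()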

Setup

  1. Clone and Setup Environment

    $ git clone https://github.com/kunnnnethan/R-YOLOv4.git
    $ cd R-YOLOv4/
    

    Create Conda Environment

    $ conda env create -f environment.yml
    

    Create Python Virtual Environment

    $ python3.8 -m venv your-env-name
    $ source your-env-name/bin/activate
    $ pip3 install torch torchvision torchaudio
    $ pip install -r requirements.txt
    
  2. Download pretrained weights
    weights

  3. Make sure your file arrangement looks like the following
    Note that each dataset folder in data should be split into three subfolders, namely train, test, and detect.

    R-YOLOv4/
    ├── train.py
    ├── test.py
    ├── detect.py
    ├── xml2txt.py
    ├── environment.yml
    ├── requirements.txt
    ├── model/
    ├── datasets/
    ├── lib/
    ├── outputs/
    ├── weights/
    │   ├── pretrained/ (for training)
    │   └── UCAS-AOD/ (for testing and detection)
    └── data/
        └── UCAS-AOD/
            ├── class.names
            ├── train/
            │   ├── ...png
            │   └── ...txt
            ├── test/
            │   ├── ...png
            │   └── ...txt
            └── detect/
                └── ...png
    

Train

I have implemented methods to load and train three different datasets: UCAS-AOD, DOTA, and a custom dataset. You can check out how those datasets are loaded into the model at /datasets. The angle of each bounding box is limited to (-pi/2, pi/2], and the height of each bounding box is always longer than its width.
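To make that convention concrete, here is a small illustrative helper (not taken from the repository) that rewrites an arbitrary (x, y, w, h, theta) box into this form: the longer edge becomes the height, and the angle is wrapped into (-pi/2, pi/2].

    import math

    def canonicalize_rbox(x, y, w, h, theta):
        # Keep the longer edge as the height; rotating by 90 degrees
        # compensates for swapping width and height.
        if w > h:
            w, h = h, w
            theta += math.pi / 2
        # Wrap theta into the (-pi/2, pi/2] range used for the labels.
        while theta <= -math.pi / 2:
            theta += math.pi
        while theta > math.pi / 2:
            theta -= math.pi
        return x, y, w, h, theta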

$ python train.py --data data/UCAS_AOD.yaml --hyp data/hyp.yaml --model_name ryolov4 --batch_size 16 --img_size 608

You can run display_inputs.py to visualize your data and confirm that it is loaded correctly.

UCAS-AOD dataset

Please refer to this repository to rearrange the files so that they can be loaded and used for training with this model.
The weights that I trained on this dataset can be downloaded from UCAS-AOD.

DOTA dataset

Download the official dataset from here. The original files can be loaded and used for training without modification.
The weights that I trained on this dataset can be downloaded from DOTA.

Train with custom dataset

  1. Use labelImg2 to help label your data. labelImg2 is capable of labeling rotated objects.
  2. Move your data folder into the R-YOLOv4/data folder.
  3. Run xml2txt.py
    1. generate txt files: python xml2txt.py --data_folder your-path --action gen_txt
    2. delete xml files: python xml2txt.py --data_folder your-path --action del_xml

A custom trash dataset that I made, along with the weights trained on it, is provided for your convenience.

Test

python test.py --data data/UCAS_AOD.yaml --hyp data/hyp.yaml --weight_path weights/ryolov4/best.pth --batch_size 8 --img_size 608

Detect

python detect.py --data data/UCAS_AOD.yaml --weight_path weights/ryolov4/best.pth --batch_size 8 --img_size 608

Tensorboard

If you would like to use TensorBoard to track the training process:

  • Open an additional terminal in the folder where you are running the program.
  • Run command $ tensorboard --logdir='weights/your_model_name' --port=6006
  • Go to http://localhost:6006/

Results

UCAS_AOD

Method                  Plane   Car     mAP
YOLOv4 (smoothL1-iou)   98.05   92.05   95.05

(detection examples: car, plane)

DOTA

DOTA has not been evaluated yet. (It is difficult to test because of the large resolution of the images.)

(sample DOTA detections)

trash (custom dataset)

Method                  Tetra Pak   Aluminum Can   mAP
YOLOv4 (smoothL1-iou)   100.00      100.00         100.00

(detection examples on the trash dataset)

References

WongKinYiu/yolov7
ultralytics/yolov5
Tianxiaomo/pytorch-YOLOv4
yangxue0827/RotationDetection
eriklindernoren/PyTorch-YOLOv3

YOLOv4: Optimal Speed and Accuracy of Object Detection

@article{yolov4,
  title={YOLOv4: Optimal Speed and Accuracy of Object Detection},
  author={Alexey Bochkovskiy and Chien-Yao Wang and Hong-Yuan Mark Liao},
  journal={arXiv},
  year={2020}
}

R3Det: Refined Single-Stage Detector with Feature Refinement for Rotating Object

@article{r3det,
  title={R3Det: Refined Single-Stage Detector with Feature Refinement for Rotating Object},
  author={Xue Yang and Junchi Yan and Ziming Feng and Tao He},
  journal={arXiv},
  year={2019}
}