
⚡ A Yolo-based, low-power, ultra-lightweight general-purpose object detection algorithm with only 250K parameters, reaching ~300 FPS+ on mobile phone CPUs.

FastestDet: higher accuracy and faster speed than Yolo-Fastest: https://github.com/dog-qiuqiu/FastestDet

⚡ Yolo-FastestV2 ⚡


  • Simple, fast, compact, and easy to port
  • Low resource usage, excellent single-core performance, low power consumption
  • Faster and smaller: trades a 0.3% drop in accuracy for a 30% increase in inference speed and a 25% reduction in parameters
  • Fast training with low compute requirements: training needs only 3 GB of GPU memory, and one COCO epoch takes about 4 minutes on a GTX 1660 Ti
  • Algorithm introduction: https://zhuanlan.zhihu.com/p/400474142  QQ discussion group: 1062122604

Evaluation metrics / Benchmark

Network            COCO mAP(0.5)   Resolution   Run Time (4×Core)   Run Time (1×Core)   FLOPs (G)   Params (M)
Yolo-FastestV2     24.10%          352×352      3.29 ms             5.37 ms             0.212       0.25
Yolo-FastestV1.1   24.40%          320×320      4.23 ms             7.54 ms             0.252       0.35
Yolov4-Tiny        40.2%           416×416      26.00 ms            55.44 ms            6.9         5.77
  • Test platform: Huawei Mate 30 (Kirin 990 CPU), based on NCNN

Improvement

  • Different loss weights for the different output scales
  • The backbone is replaced with the more lightweight ShuffleNetV2
  • The anchor matching mechanism and loss follow YoloV5, and the classification loss is switched from sigmoid to softmax cross-entropy
  • The detection head is decoupled into three branches: obj (foreground/background classification), cls (category classification), and reg (bounding-box regression); see the sketch after this list
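
An illustrative PyTorch sketch of the decoupled-head idea above (channel counts, feature-map size, and the per-anchor layout are assumptions for illustration, not the repository's actual layer definitions):

    import torch
    import torch.nn as nn

    class DecoupledHead(nn.Module):
        """Illustrative decoupled detection head with separate obj / cls / reg branches."""

        def __init__(self, in_channels=96, num_classes=80, anchor_num=3):
            super().__init__()
            # Shared 1x1 projection before the branches split.
            self.stem = nn.Sequential(
                nn.Conv2d(in_channels, in_channels, 1, bias=False),
                nn.BatchNorm2d(in_channels),
                nn.ReLU(inplace=True),
            )
            self.obj = nn.Conv2d(in_channels, anchor_num, 1)       # foreground/background score per anchor
            self.cls = nn.Conv2d(in_channels, num_classes, 1)      # category scores (shared across anchors)
            self.reg = nn.Conv2d(in_channels, anchor_num * 4, 1)   # box regression (x, y, w, h) per anchor

        def forward(self, x):
            x = self.stem(x)
            return self.obj(x), self.cls(x), self.reg(x)

    if __name__ == "__main__":
        head = DecoupledHead()
        obj, cls, reg = head(torch.randn(1, 96, 22, 22))           # e.g. a 352/16 = 22 feature map
        print(obj.shape, cls.shape, reg.shape)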

How to use

Install dependencies

  • PIP
pip3 install -r requirements.txt

Test

  • Image test
    python3 test.py --data data/coco.data --weights modelzoo/coco2017-0.241078ap-model.pth --img img/000139.jpg
    

How to train

Building the dataset (the dataset is constructed in the same way as for Darknet Yolo)

  • The dataset format is the same as Darknet Yolo's: each image has a corresponding .txt label file. Each label line has the format "category cx cy w h", where category is the class index, cx and cy are the normalized center coordinates of the box, and w and h are its normalized width and height. Example .txt label file content (a helper sketch for writing these labels appears after this list):

    11 0.344192634561 0.611 0.416430594901 0.262
    14 0.509915014164 0.51 0.974504249292 0.972
    
  • Each image and its corresponding label file share the same base name and are stored in the same directory. The data directory structure is as follows:

    .
    ├── train
    │   ├── 000001.jpg
    │   ├── 000001.txt
    │   ├── 000002.jpg
    │   ├── 000002.txt
    │   ├── 000003.jpg
    │   └── 000003.txt
    └── val
        ├── 000043.jpg
        ├── 000043.txt
        ├── 000057.jpg
        ├── 000057.txt
        ├── 000070.jpg
        └── 000070.txt
    
  • Generate the dataset path .txt files (also covered by the sketch after this list); example contents are as follows:

    train.txt

    /home/qiuqiu/Desktop/dataset/train/000001.jpg
    /home/qiuqiu/Desktop/dataset/train/000002.jpg
    /home/qiuqiu/Desktop/dataset/train/000003.jpg
    

    val.txt

    /home/qiuqiu/Desktop/dataset/val/000070.jpg
    /home/qiuqiu/Desktop/dataset/val/000043.jpg
    /home/qiuqiu/Desktop/dataset/val/000057.jpg
    
  • Generate the .names category label file, the sample content is as follows:

    category.names

    person
    bicycle
    car
    motorbike
    ...
    
    
  • The final directory structure of the constructed training dataset is as follows:

    .
    ├── category.names        # .names category label file
    ├── train                 # train dataset
    │   ├── 000001.jpg
    │   ├── 000001.txt
    │   ├── 000002.jpg
    │   ├── 000002.txt
    │   ├── 000003.jpg
    │   └── 000003.txt
    ├── train.txt             # train dataset path .txt file
    ├── val                   # val dataset
    │   ├── 000043.jpg
    │   ├── 000043.txt
    │   ├── 000057.jpg
    │   ├── 000057.txt
    │   ├── 000070.jpg
    │   └── 000070.txt
    └── val.txt               # val dataset path .txt file
    
    

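The label format and path-file steps referenced above can be scripted. A minimal helper sketch, assuming you start from pixel-coordinate boxes and the directory layout shown above (the example box and directory names are placeholders):

    import os
    from pathlib import Path

    def to_darknet_line(category, x_min, y_min, x_max, y_max, img_w, img_h):
        """Convert a pixel-coordinate box to a normalized 'category cx cy w h' label line."""
        cx = (x_min + x_max) / 2.0 / img_w
        cy = (y_min + y_max) / 2.0 / img_h
        w = (x_max - x_min) / img_w
        h = (y_max - y_min) / img_h
        return f"{category} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

    def write_path_file(image_dir, out_txt):
        """Write the absolute path of every .jpg in image_dir into a train.txt / val.txt style file."""
        images = sorted(Path(image_dir).glob("*.jpg"))
        with open(out_txt, "w") as f:
            for img in images:
                f.write(f"{img.resolve()}\n")
        return len(images)

    if __name__ == "__main__":
        # Example: class 11, box (50, 30)-(200, 170) in a 500x333 image.
        print(to_darknet_line(11, 50, 30, 200, 170, 500, 333))

        # Placeholder directories -- point these at your own train/ and val/ folders.
        for split in ("train", "val"):
            if os.path.isdir(split):
                count = write_path_file(split, f"{split}.txt")
                print(f"{split}.txt: {count} images")
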
Get anchor bias

  • Generate anchors based on the current dataset (the clustering idea is sketched after this list)
    python3 genanchors.py --traintxt ./train.txt
    
  • An anchors6.txt file will be generated in the current directory; its sample content is as follows:
    12.64,19.39, 37.88,51.48, 55.71,138.31, 126.91,78.23, 131.57,214.55, 279.92,258.87  # anchor bias
    0.636158                                                                             # iou
    
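genanchors.py is the repository's tool for this step; the sketch below only illustrates the underlying idea of clustering the labeled box sizes (scaled to the 352×352 input) with plain k-means. The distance metric and output format of the actual script may differ:

    import os
    import sys
    import numpy as np

    def load_box_sizes(train_txt, input_size=352):
        """Collect (w, h) of every labeled box, scaled to the network input resolution."""
        sizes = []
        for image_path in open(train_txt).read().splitlines():
            if not image_path.strip():
                continue
            label_path = image_path.rsplit(".", 1)[0] + ".txt"
            if not os.path.isfile(label_path):
                continue
            for line in open(label_path):
                parts = line.split()
                if len(parts) == 5:                         # category cx cy w h
                    w, h = float(parts[3]), float(parts[4])
                    sizes.append((w * input_size, h * input_size))
        return np.array(sizes)

    def kmeans_anchors(sizes, k=6, iters=100, seed=0):
        """Plain Lloyd's k-means over box sizes (a simplification of IoU-based clustering)."""
        rng = np.random.default_rng(seed)
        centers = sizes[rng.choice(len(sizes), size=k, replace=False)]
        for _ in range(iters):
            dists = np.linalg.norm(sizes[:, None, :] - centers[None, :, :], axis=2)
            assign = dists.argmin(axis=1)
            centers = np.array([sizes[assign == i].mean(axis=0) if np.any(assign == i) else centers[i]
                                for i in range(k)])
        return centers[np.argsort(centers.prod(axis=1))]    # sort by area, small to large

    if __name__ == "__main__":
        sizes = load_box_sizes(sys.argv[1] if len(sys.argv) > 1 else "train.txt")
        print(", ".join(f"{w:.2f},{h:.2f}" for w, h in kmeans_anchors(sizes)))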

Build the training .data configuration file

  • Refer to ./data/coco.data (a minimal parser sketch follows this example)
    [name]
    model_name=coco           # model name
    
    [train-configure]
    epochs=300                # training epochs
    steps=150,250             # epochs at which the learning rate decays
    batch_size=64             # batch size
    subdivisions=1            # same as the subdivisions field in a darknet cfg file
    learning_rate=0.001       # learning rate
    
    [model-configure]
    pre_weights=None          # path of the weights to load; if None, training starts from scratch
    classes=80                # Number of detection categories
    width=352                 # The width of the model input image
    height=352                # The height of the model input image
    anchor_num=3              # anchor num
    anchors=12.64,19.39, 37.88,51.48, 55.71,138.31, 126.91,78.23, 131.57,214.55, 279.92,258.87 #anchor bias
    
    [data-configure]
    train=/media/qiuqiu/D/coco/train2017.txt   # train dataset path .txt file
    val=/media/qiuqiu/D/coco/val2017.txt       # val dataset path .txt file 
    names=./data/coco.names                    # .names category label file
    
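The .data file is a simple key=value format with [section] headers and trailing # comments. A minimal sketch of a parser for it (illustrative only; the repository has its own configuration loader, and the comment/section handling here is an assumption):

    def load_data_cfg(path):
        """Parse a .data configuration file into a flat dict of strings."""
        cfg = {}
        for raw in open(path):
            line = raw.split("#", 1)[0].strip()      # drop trailing comments
            if not line or line.startswith("["):     # skip blank lines and [section] headers
                continue
            key, _, value = line.partition("=")
            cfg[key.strip()] = value.strip()
        return cfg

    if __name__ == "__main__":
        cfg = load_data_cfg("data/coco.data")
        print(cfg["model_name"], cfg["width"], cfg["height"], cfg["classes"])
        anchors = [float(x) for x in cfg["anchors"].split(",")]
        print(len(anchors) // 2, "anchor pairs:", anchors)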

Train

  • Run the training task
    python3 train.py --data data/coco.data
    

Evaluation

  • Compute the mAP evaluation
    python3 evaluation.py --data data/coco.data --weights modelzoo/coco2017-0.241078ap-model.pth
    

Deploy

NCNN

  • Convert to onnx (an optional onnxruntime sanity check is sketched after this list)
    python3 pytorch2onnx.py --data data/coco.data --weights modelzoo/coco2017-0.241078ap-model.pth --output yolo-fastestv2.onnx
    
  • onnx-sim
    python3 -m onnxsim yolo-fastestv2.onnx yolo-fastestv2-opt.onnx
    
  • Build NCNN
    git clone https://github.com/Tencent/ncnn.git
    cd ncnn
    mkdir build
    cd build
    cmake ..
    make
    make install
    cd ../..
    cp -rf ./ncnn/build/install/* ~/Yolo-FastestV2/sample/ncnn
    
  • Convert to ncnn param and bin
    cd ncnn/build/tools/onnx
    ./onnx2ncnn yolo-fastestv2-opt.onnx yolo-fastestv2.param yolo-fastestv2.bin
    cp yolo-fastestv2* ../
    cd ../
    ./ncnnoptimize yolo-fastestv2.param yolo-fastestv2.bin yolo-fastestv2-opt.param yolo-fastestv2-opt.bin 1
    cp yolo-fastestv2-opt* ~/Yolo-FastestV2/sample/ncnn/model
    
  • Run the sample
    cd ~/Yolo-FastestV2/sample/ncnn
    sh build.sh
    ./demo
    
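Before converting to NCNN, the simplified ONNX model can optionally be sanity-checked with onnxruntime. A minimal sketch (not part of the repository's workflow; assumes onnxruntime is installed and uses the 352×352 input size from the .data configuration):

    import numpy as np
    import onnxruntime as ort

    # Load the simplified model produced by onnxsim.
    sess = ort.InferenceSession("yolo-fastestv2-opt.onnx", providers=["CPUExecutionProvider"])

    inp = sess.get_inputs()[0]
    print("input:", inp.name, inp.shape)

    # Dummy NCHW input at the training resolution; replace with a real, normalized image.
    dummy = np.random.rand(1, 3, 352, 352).astype(np.float32)
    outputs = sess.run(None, {inp.name: dummy})

    for meta, out in zip(sess.get_outputs(), outputs):
        print(meta.name, out.shape)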

Reference