StrongSORT with OSNet for YOLOv5 and YOLOv7 (Counter)
Based on the official YOLOv5 and YOLOv7 repositories. YOLOv7 is the implementation of the paper "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors".
Introduction
This repository contains a highly configurable two-stage tracker that adjusts to different deployment scenarios. The detections generated by YOLOv5 and YOLOv7, a family of object detection architectures and models pretrained on the COCO dataset, are passed to StrongSORT, which combines motion and appearance information based on OSNet in order to track the objects. It can track any object that your YOLOv5 or YOLOv7 model was trained to detect. A minimal, illustrative sketch of this data flow is shown below.
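For orientation, here is a minimal, purely illustrative Python sketch of that two-stage flow, using hypothetical stand-in names rather than the repository's actual classes or function signatures: a detector produces boxes per frame, and the tracker assigns persistent IDs by combining motion and appearance cues.

```python
# Illustrative stand-ins only; not the repository's API.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    box: Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels
    conf: float                             # detector confidence
    cls: int                                # COCO class index

def detect(frame) -> List[Detection]:
    """Stand-in for YOLOv5/YOLOv7 inference on one frame."""
    return []  # a real detector returns the boxes found in the frame

class StrongSortLikeTracker:
    """Stand-in for StrongSORT: assigns persistent IDs across frames,
    which the real tracker does by fusing a motion model with OSNet
    appearance embeddings."""
    def __init__(self):
        self._next_id = 0

    def update(self, detections: List[Detection]) -> List[Tuple[int, int]]:
        # The real tracker matches detections to existing tracks; this stub
        # just hands out fresh IDs to show the shape of the data flow.
        tracks = []
        for det in detections:
            tracks.append((self._next_id, det.cls))
            self._next_id += 1
        return tracks

tracker = StrongSortLikeTracker()
for frame in []:  # iterate over decoded video frames in a real run
    tracks = tracker.update(detect(frame))
```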
Before you run the tracker
- Clone the repository recursively:
git clone --recurse-submodules https://github.com/bharath5673/StrongSORT-YOLO.git
If you already cloned and forgot to use --recurse-submodules, you can run git submodule update --init
- Make sure that you fulfill all the requirements: Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install, run:
pip install -r requirements.txt
Tracking sources
Tracking can be run on most video formats, as well as directly on a webcam (--source 0, as in the examples below).
Select object detectors and ReID model
Yolov5
There is a clear trade-off between model inference speed and accuracy. To meet your speed/accuracy needs, select a YOLOv5 family model for automatic download:
$ python track_v5.py --source 0 --yolo-weights weights/yolov5n.pt --img 640
yolov5s.pt
yolov5m.pt
yolov5l.pt
yolov5x.pt --img 1280
...
Yolov7
There is a clear trade-off between model inference speed and accuracy. To meet your speed/accuracy needs, select a YOLOv7 family model for automatic download:
$ python track_v7.py --source 0 --yolo-weights weights/yolov7-tiny.pt --img 640
yolov7.pt
yolov7x.pt
yolov7-w6.pt
yolov7-e6.pt
yolov7-d6.pt
yolov7-e6e.pt
...
StrongSORT
The above applies to the StrongSORT ReID models as well. Choose a ReID model based on your needs from the ReID model zoo:
$ python track_v*.py --source 0 --strong-sort-weights osnet_x0_25_market1501.pt
osnet_x0_5_market1501.pt
osnet_x0_75_msmt17.pt
osnet_x1_0_msmt17.pt
...
Filter tracked classes
By default the tracker tracks all MS COCO classes.
If you only want to track persons, I recommend using these weights for increased performance:
python track_v*.py --source 0 --yolo-weights weights/v*.pt --classes 0 # tracks persons, only
If you want to track a subset of the MS COCO classes, add their corresponding indices after the classes flag:
python track_v*.py --source 0 --yolo-weights weights/v*.pt --classes 15 16 # tracks cats and dogs, only
Counter
Get real-time counts of every tracked object without any ROIs or line intersections; a minimal sketch of the counting idea follows the command below.
$ python track_v*.py --source test.mp4 --yolo-weights weights/v*.pt --save-txt --count --show-vid
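The counting behind --count needs no ROI or line-crossing logic because StrongSORT already gives each object a persistent track ID. The snippet below is a minimal, illustrative sketch of that idea (not the repository's actual code), assuming you have per-frame (track_id, class_name) pairs from the tracker:

```python
from collections import defaultdict

# Remember which track IDs have already been seen for each class, so every
# object is counted exactly once no matter how many frames it stays in view.
seen_ids_per_class = defaultdict(set)

def update_counts(tracks):
    """tracks: iterable of (track_id, class_name) pairs for one frame."""
    for track_id, class_name in tracks:
        seen_ids_per_class[class_name].add(track_id)
    return {cls: len(ids) for cls, ids in seen_ids_per_class.items()}

# Example: two frames of tracker output
print(update_counts([(1, "person"), (2, "car")]))     # {'person': 1, 'car': 1}
print(update_counts([(1, "person"), (3, "person")]))  # {'person': 2, 'car': 1}
```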
Draw Object Trajectory
$ python track_v*.py --source test.mp4 --yolo-weights weights/v*.pt --save-txt --count --show-vid --draw
Here is a list of all the possible objects that a YOLOv5 model trained on MS COCO can detect. Notice that the indexing for the classes in this repo starts at zero; the snippet below shows the start of that ordering.
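For reference, this is the start of the standard zero-indexed COCO-80 ordering used by the COCO-pretrained YOLOv5/YOLOv7 weights (the full 80-name list ships with the models, typically as data/coco.yaml); the numbers passed to --classes refer to these indices:

```python
# First 18 of the 80 COCO class names, in the standard zero-indexed order.
COCO80_NAMES = [
    "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train",
    "truck", "boat", "traffic light", "fire hydrant", "stop sign",
    "parking meter", "bench", "bird", "cat", "dog", "horse",
    # ... remaining classes up to index 79
]

assert COCO80_NAMES.index("person") == 0   # --classes 0
assert COCO80_NAMES.index("cat") == 15     # --classes 15 16
assert COCO80_NAMES.index("dog") == 16
```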
MOT compliant results
Results can be saved to your experiment folder runs/track/<yolo_model>_<deep_sort_model>/ by running the command below; a sketch for reading the saved files back follows it.
python track_v*.py --source ... --save-txt
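A minimal sketch for reading those files back, assuming the standard MOTChallenge column order (frame, id, bb_left, bb_top, bb_width, bb_height, ...); verify against the files --save-txt actually writes, since the delimiter and any extra columns may differ:

```python
from collections import defaultdict

def load_mot_tracks(txt_path):
    """Group MOT-style rows by track id: {id: [(frame, x, y, w, h), ...]}."""
    tracks = defaultdict(list)
    with open(txt_path) as f:
        for line in f:
            parts = line.replace(",", " ").split()
            if len(parts) < 6:
                continue  # skip malformed or empty lines
            frame, track_id = int(float(parts[0])), int(float(parts[1]))
            x, y, w, h = map(float, parts[2:6])
            tracks[track_id].append((frame, x, y, w, h))
    return tracks

# Hypothetical output path; the real folder name depends on your run
tracks = load_mot_tracks("runs/track/yolov5n_osnet_x0_25_market1501/test.txt")
print(f"{len(tracks)} unique track IDs")
```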
Cite
If you find this project useful in your research, please consider citing:
@misc{yolov5-strongsort-osnet-2022,
title={Real-time multi-camera multi-object tracker using YOLOv5 and StrongSORT with OSNet},
author={Mikel Broström},
howpublished = {\url{https://github.com/mikel-brostrom/Yolov5_StrongSORT_OSNet}},
year={2022}
}
@article{wang2022yolov7,
title={{YOLOv7}: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors},
author={Wang, Chien-Yao and Bochkovskiy, Alexey and Liao, Hong-Yuan Mark},
journal={arXiv preprint arXiv:2207.02696},
year={2022}
}
Acknowledgements
- https://github.com/AlexeyAB/darknet
- https://github.com/WongKinYiu/yolor
- https://github.com/WongKinYiu/PyTorch_YOLOv4
- https://github.com/WongKinYiu/ScaledYOLOv4
- https://github.com/Megvii-BaseDetection/YOLOX
- https://github.com/ultralytics/yolov3
- https://github.com/ultralytics/yolov5
- https://github.com/DingXiaoH/RepVGG
- https://github.com/JUGGHM/OREPA_CVPR2022
- https://github.com/TexasInstruments/edgeai-yolov5/tree/yolo-pose