
YOLOv5 in TensorRT

yolov5-tensorrt

Port a PyTorch/ONNX YOLOv5 model to run on a Jetson Nano.

The notebook (ipynb) is for testing the PyTorch code and exporting ONNX models using Google Colab.

The Python code runs the NumPy/TensorRT implementation on the Jetson Nano.
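As a rough sketch of the NumPy side of that implementation, input preparation for a 640x640 engine might look like the following (hypothetical helper, NumPy only; the repository's Processor class additionally handles resizing and other details):

```python
import numpy as np

def preprocess(image, size=640):
    """Toy pre-processing sketch: pad to a square canvas, scale pixel
    values to [0, 1], and reorder HWC -> CHW for the network.
    Assumes the image already fits within `size` x `size`."""
    h, w, _ = image.shape
    canvas = np.zeros((size, size, 3), dtype=np.uint8)
    canvas[:h, :w] = image                  # paste top-left (no resizing here)
    x = canvas.astype(np.float32) / 255.0   # normalize to [0, 1]
    x = x.transpose(2, 0, 1)                # HWC -> CHW
    return x[None]                          # add batch dim: (1, 3, size, size)
```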

├── python
│   ├── lib
│   │   ├── demo.py
│   │   ├── Processor.py
│   │   ├── Visualizer.py
│   │   ├── classes.py
│   │   └── models
│   │       ├── yolov5s-simple-32.trt
│   │       ├── yolov5s-simple-16.trt
│   │       └── yolov5s-simple.onnx
│   └── export_tensorrt.py
  • convert the YOLOv5 ONNX model to a TensorRT engine
  • pre-process the input image
  • run inference against the input using the TensorRT engine
  • post-process the network output
  • apply NMS thresholding to candidate boxes
  • visualize the results
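The NMS thresholding step above can be sketched in plain NumPy (box layout and threshold value are illustrative, not the repository's actual API):

```python
import numpy as np

def nms(boxes, scores, iou_thres=0.45):
    """Greedy non-maximum suppression on [x1, y1, x2, y2] boxes."""
    order = scores.argsort()[::-1]           # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        # IoU of the top-scoring box against the remaining candidates
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thres]  # drop overlapping boxes
    return keep
```

This vectorized greedy loop is the standard approach; per-box Python loops are the usual reason NMS shows up as a bottleneck.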

compile onnx model to trt

python3 export_tensorrt.py --help
usage: export_tensorrt.py [-h] [-m MODEL] [-fp FLOATINGPOINT] [-o OUTPUT]

compile Onnx model to TensorRT

optional arguments:
  -h, --help            show this help message and exit
  -m MODEL, --model MODEL
                        onnx file location inside ./lib/models
  -fp FLOATINGPOINT, --floatingpoint FLOATINGPOINT
                        floating point precision. 16 or 32
  -o OUTPUT, --output OUTPUT
                        name of trt output file

run demo

python3 demo.py --image=./path/to/image.jpg --model=./path/to/model.trt

performance

For now, only initial inference performance is being tested; NMS and post-processing are still slow.

model          fp precision   input size   time (ms)
small-simple   32             640x640x3    221
small-simple   16             640x640x3    ?
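A per-inference timing like the one above can be reproduced with a generic harness (not the repository's code; `run` stands in for a call into the TensorRT engine):

```python
import time

def time_inference(run, warmup=5, iters=20):
    """Average wall-clock time of run() in milliseconds, after warm-up.
    Warm-up matters: the first TensorRT executions include lazy allocation."""
    for _ in range(warmup):
        run()
    start = time.perf_counter()
    for _ in range(iters):
        run()
    return (time.perf_counter() - start) / iters * 1000.0
```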

object probability