# TensorRT-CenterNet
## Demo (GTX 1070)
- ctdet_coco_dla_2x
- centerface
- cthelmet
## Performance
Model | Input Size | GPU | Mode | Inference Time |
---|---|---|---|---|
mobilenetv2 | 512x512 | GTX 1070 | fp32 | 3.798 ms |
mobilenetv2 | 512x512 | GTX 1070 | int8 | 1.75 ms |
mobilenetv2 | 512x512 | Jetson TX2 | fp16 | 22 ms |
dla34 | 512x512 | GTX 1070 | fp32 | 24 ms |
dla34 | 512x512 | GTX 1070 | int8 | 19.6 ms |
dla34 | 512x512 | Jetson TX2 | fp32 | 209 ms |
dla34 | 512x512 | Jetson TX2 | fp16 | 186 ms |
dla34v0 | 512x512 | GTX 1070 | fp32 | 12.6 ms |
dla34v0 | 512x512 | GTX 1070 | int8 | 6.76 ms |
dla34v0 | 512x512 | Jetson TX2 | fp32 | 114 ms |
dla34v0 | 512x512 | Jetson TX2 | fp16 | 80 ms |
resdcn101 | 512x512 | GTX 1070 | fp32 | 20.9 ms |
resdcn18 | 512x512 | GTX 1070 | fp32 | 5.81 ms |
resdcn18 | 512x512 | GTX 1070 | int8 | 3.63 ms |
resdcn18 | 512x512 | Jetson TX2 | fp32 | 54 ms |
resdcn18 | 512x512 | Jetson TX2 | fp16 | 41 ms |
- Supports Deformable Convolution v2 (DCNv2).
- No NMS; heatmap peaks are selected with a max-pooling step instead (see the sketch below).
- Supports fp32, fp16, and int8 modes.
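CenterNet can skip NMS because detections are read directly from heatmap peaks: a 3x3 max pooling keeps only local maxima (the `maxpool kernel_size = 3` note further down). A minimal PyTorch sketch of that idea, independent of this repo's C++ post-processing:

```python
import torch
import torch.nn.functional as F

def heatmap_peaks(hm, kernel_size=3):
    """Zero out every heatmap cell that is not the maximum of its
    kernel_size x kernel_size neighborhood (pseudo-NMS via max pooling)."""
    pad = (kernel_size - 1) // 2
    hmax = F.max_pool2d(hm, kernel_size, stride=1, padding=pad)
    keep = (hmax == hm).float()
    return hm * keep

# usage sketch: peaks = heatmap_peaks(torch.sigmoid(hm_logits));
# scores below the detection threshold (0.01 in this repo's config) are then dropped
```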
## Eval Result
Model | GPU | Mode | AP (TRT/paper) | AP50 | AP75 | APS | APM | APL |
---|---|---|---|---|---|---|---|---|
ctdet_coco_dla_2x | GTX 1070 | fp32 | 0.365/0.374 | 0.543 | 0.390 | 0.164 | 0.398 | 0.536 |
ctdet_coco_dlav0_1x | GTX 1070 | fp32 | 0.324/-- | 0.511 | 0.343 | 0.140 | 0.350 | 0.476 |
ctdet_coco_dlav0_1x | GTX 1070 | int8 | 0.295/-- | 0.468 | 0.311 | 0.123 | 0.318 | 0.446 |
ctdet_coco_resdcn101 | GTX 1070 | fp32 | 0.332/0.346 | 0.516 | 0.349 | 0.115 | 0.367 | 0.531 |
ctdet_coco_resdcn18 | GTX 1070 | fp32 | 0.277/0.281 | 0.448 | 0.286 | 0.083 | 0.290 | 0.454 |
ctdet_coco_resdcn18 | GTX 1070 | int8 | 0.242/0.281 | 0.401 | 0.250 | 0.061 | 0.255 | 0.409 |
## Notes
- AP measured on COCO val2017 with no test-time augmentation.
- input_size = 512x512
- thresh = 0.01
- maxpool kernel_size = 3
- calib_img_list.txt: 700 images randomly sampled from COCO2017/val2017 (a sketch for generating it follows this list).
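A minimal sketch for producing calib_img_list.txt, assuming the val2017 images live under COCO2017/val2017:

```python
import os
import random

val_dir = "COCO2017/val2017"  # dataset path as in the note above; adjust to your setup
images = [os.path.join(val_dir, name)
          for name in os.listdir(val_dir) if name.endswith(".jpg")]

# write 700 randomly sampled calibration image paths, one per line
with open("calib_img_list.txt", "w") as fp:
    fp.write("\n".join(random.sample(images, 700)))
```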
## Environments
- GTX 1070
  - PyTorch 1.0-1.1
  - Ubuntu 16.04
  - TensorRT 5.0
  - onnx-tensorrt v5.0
  - CUDA 9.0
- Jetson TX2
  - JetPack 4.2
## Models
- Convert the CenterNet model to ONNX. See here for details.
- Use Netron to verify that the outputs of the converted ONNX model are (hm, reg, wh) (see the sketch below).
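Besides Netron, the output names can also be checked with the onnx Python package; a small sketch (the model path is just an example):

```python
import onnx

model = onnx.load("model/ctdet_coco_dla_2x.onnx")
# the converted model should expose exactly the three CenterNet heads
print([out.name for out in model.graph.output])  # expected: ['hm', 'reg', 'wh']
```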
## Example
```bash
git clone https://github.com/CaoWGG/TensorRT-CenterNet.git
cd TensorRT-CenterNet
mkdir build
cd build && cmake .. && make
cd ..
## ctdet | config: include/ctdetConfig.h
## float32
./buildEngine -i model/ctdet_coco_dla_2x.onnx -o model/ctdet_coco_dla_2x.engine
./runDet -e model/ctdet_coco_dla_2x.engine -i test.jpg -c test.h264
## cthelmet | config: include/ctdetConfig.h
## float32
./buildEngine -i model/ctdet_helmet.onnx -o model/ctdet_helmet.engine -m 0
./runDet -e model/ctdet_helmet.engine -i test.jpg -c test.h264
## int8
./buildEngine -i model/ctdet_helmet.onnx -o model/ctdet_helmet.engine -m 2 -c calib_img_list.txt
./runDet -e model/ctdet_helmet.engine -i test.jpg -c test.h264
## centerface | config: include/ctdetConfig.h
./buildEngine -i model/centerface.onnx -o model/centerface.engine
./runDet -e model/centerface.engine -i test.jpg -c test.h264
## run eval_coco.py | config your COCO dataset and the ctdet_coco engine
python3 eval_coco.py model/ctdet_coco_dla_2x.engine
```
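For reference, COCO-style AP as reported in the table above is normally computed with pycocotools. The following is a generic sketch of that flow, not the contents of eval_coco.py; the annotation and result file names are assumptions:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")  # ground-truth annotations (assumed path)
coco_dt = coco_gt.loadRes("trt_detections.json")      # detections exported by the engine (assumed name)

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP, AP50, AP75, APS, APM, APL as in the table above
```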