TensorRT ONNX Plugin, Inference, Compile

This version is deprecated; please move to the latest version.

YoloV5 Support

http://zifuture.com:1556/fs/16.std/release_tensorRT_yolov5.zip

TensorRT-Integrate

  1. Support for PyTorch ONNX plugins (DCN, HSwish, etc.)
  2. Simpler inference and plugin APIs

Re-implementations

CenterNet: ctdet_coco_dla_2x



CenterTrack: coco_tracking



DBFace


Use TensorRT-Integrate

Install protobuf == 3.11.4 (versions >= 3.8.x also work, but are more troublesome to set up), then:

bash scripts/getALL.sh
make run -j32

Inference Code

#include <opencv2/opencv.hpp>   // for Mat and imread; the project's TRTInfer header is also required
using namespace cv;

// load a serialized engine produced by the compile step
auto engine = TRTInfer::loadEngine("models/efficientnet-b0.fp32.trtmodel");

// ImageNet normalization constants
float mean[3] = {0.485, 0.456, 0.406};
float std[3] = {0.229, 0.224, 0.225};
Mat image = imread("img.jpg");
auto input = engine->input();

// multi-batch sample: batch size 2, the same image normalized into both slots on the GPU
input->resize(2);
input->setNormMatGPU(0, image, mean, std);
input->setNormMatGPU(1, image, mean, std);

engine->forward();

// get results and copy to CPU, by output index or by tensor name
engine->output(0)->cpu<float>();
engine->tensor("hm")->cpu<float>();
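
Each cpu<float>() call copies a tensor back to host memory. Below is a minimal sketch of consuming a classification output; the assumption that cpu<float>() returns a raw float*, and the 1000-class ImageNet head for efficientnet-b0, are illustrative rather than taken from this repo's documented API.

// hypothetical post-processing: argmax over classification logits
#include <cstdio>

void printTopClass(const float* logits, int numClasses) {
    int best = 0;
    for (int i = 1; i < numClasses; ++i)
        if (logits[i] > logits[best])
            best = i;
    printf("predicted class: %d, score: %f\n", best, logits[best]);
}

// usage after engine->forward():
//   printTopClass(engine->output(0)->cpu<float>(), 1000);  // 1000: assumed class count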

Environment


Plugin

  1. PyTorch ONNX export: plugin_onnx_export.py
  2. Plugin implementations: MReLU.cu, HSwish.cu, DCNv2.cu (a kernel sketch follows below)
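
These .cu files hold the CUDA side of each plugin. As a hedged sketch of the general shape (this is not the repo's actual HSwish.cu, just an elementwise kernel computing the standard HSwish formula x * relu6(x + 3) / 6):

// illustrative HSwish kernel, not the repo's implementation
__global__ void hswishKernel(const float* input, float* output, int edge) {
    int position = blockDim.x * blockIdx.x + threadIdx.x;
    if (position >= edge) return;

    float x = input[position];
    float a = x + 3.0f;
    a = a < 0.0f ? 0.0f : (a > 6.0f ? 6.0f : a);   // relu6(x + 3)
    output[position] = x * a / 6.0f;
}

// launch over 'count' elements:
//   int threads = 512;
//   int blocks  = (count + threads - 1) / threads;
//   hswishKernel<<<blocks, threads>>>(input, output, count);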