Convert EfficientDet to a TensorRT engine

Description

Hi there,
I’m new to TensorRT and I’m working on converting Google’s EfficientDet TensorFlow model (https://github.com/google/automl/tree/master/efficientdet) to a TensorRT engine so we can deploy it in the DeepStream framework on a Jetson Nano.
Basically, the EfficientDet TensorFlow model is composed of two parts:

  1. Pre-processing and inference (output: class confidences and anchor-based bounding-box offset predictions for feature maps at 5 different scales)
  2. Post-processing, written with the TensorFlow 1.x API, which can be exported as a TensorFlow GraphDef. The post-processing pipeline is:
    merge all outputs into one tensor
    select the top-K results by confidence
    decode class labels and bbox coordinates from the top-K results
    run NMS for each class
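For reference, the post-processing pipeline above can be sketched in plain NumPy. The tensor layouts, helper names, and the [y1, x1, y2, x2] box format here are illustrative assumptions, not EfficientDet’s exact implementation:

```python
import numpy as np

def decode_boxes(anchors, deltas):
    """Apply anchor-relative offsets (dy, dx, dh, dw) to anchor boxes.
    Anchors and output boxes are [y1, x1, y2, x2] (assumed layout)."""
    ay = (anchors[:, 0] + anchors[:, 2]) / 2.0
    ax = (anchors[:, 1] + anchors[:, 3]) / 2.0
    ah = anchors[:, 2] - anchors[:, 0]
    aw = anchors[:, 3] - anchors[:, 1]
    cy = deltas[:, 0] * ah + ay
    cx = deltas[:, 1] * aw + ax
    h = np.exp(deltas[:, 2]) * ah
    w = np.exp(deltas[:, 3]) * aw
    return np.stack([cy - h / 2, cx - w / 2, cy + h / 2, cx + w / 2], axis=1)

def nms(boxes, scores, iou_thr=0.5):
    """Greedy NMS; returns indices of kept boxes, highest score first."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        yy1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        xx1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        yy2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        xx2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, yy2 - yy1) * np.maximum(0, xx2 - xx1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter + 1e-9)
        order = order[1:][iou <= iou_thr]
    return keep

def postprocess(cls_scores, box_deltas, anchors, k=100, iou_thr=0.5):
    """cls_scores: (num_anchors, num_classes), already merged across scales.
    Returns (boxes, scores, classes) after top-k selection and per-class NMS."""
    flat = cls_scores.ravel()
    k = min(k, flat.size)
    top = np.argpartition(flat, -k)[-k:]           # top-k over (anchor, class)
    anchor_idx, class_idx = np.unravel_index(top, cls_scores.shape)
    boxes = decode_boxes(anchors[anchor_idx], box_deltas[anchor_idx])
    scores = flat[top]
    out_b, out_s, out_c = [], [], []
    for c in np.unique(class_idx):                 # NMS per class
        m = class_idx == c
        keep = nms(boxes[m], scores[m], iou_thr)
        out_b.append(boxes[m][keep])
        out_s.append(scores[m][keep])
        out_c.append(np.full(len(keep), c))
    return np.concatenate(out_b), np.concatenate(out_s), np.concatenate(out_c)
```

In the real pipeline, these same steps would run either inside a TensorRT engine (with plugins for the unsupported ops) or in a custom parser on the CPU.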

Here are some problems I’ve encountered:

  • I’ve successfully converted the first part of the EfficientDet model to a TRT engine, using tf2onnx as a middleman.
  • When I try to convert the post-processing part of EfficientDet to a TRT engine, there are some ops and data types that tf2onnx cannot support.
  • If I want to reimplement the post-processing part and insert it into a DeepStream pipeline, should I write it as a TensorRT plugin or as a DeepStream GIE custom parser? Is there any guide for that?

Besides, given EfficientDet’s significant improvements in both speed and accuracy (39.6 mAP on COCO at 6 GFLOPs), it would be invaluable if NVIDIA could release a DeepStream pipeline sample for it, like the ones for YOLOv3 and SSD. Is there any plan for that?

Hi,
Thanks for your suggestion! We will consider it, but we don’t have a plan for now.

If I want to reimplement the post-processing part and insert it into a DeepStream pipeline, should I write it as a TensorRT plugin or as a DeepStream GIE custom parser? Is there any guide for that?
I think the whole “post-process” stage should be built as one TensorRT instance and run by the nvinfer DeepStream plugin.
To build the “post-process” stage as one TensorRT instance, you need to implement the TensorRT-unsupported ops as TensorRT plugins.
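As a rough sketch of that wiring, the nvinfer instance for the post-process engine could look like the config fragment below. The file names and paths are placeholders, and the exact property values (precision, class count) depend on your model:

```ini
[property]
gpu-id=0
# ONNX export of the post-process graph (placeholder path)
onnx-file=efficientdet_postprocess.onnx
# Library that registers the TensorRT plugins for the unsupported ops (placeholder path)
custom-lib-path=libefficientdet_trt_plugins.so
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=90
gie-unique-id=2
```

The plugin library named in `custom-lib-path` is loaded by nvinfer before engine deserialization, so the plugins are available when the engine is built or loaded.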

Let me know if the above information is enough.


And, you could implement it either as a plugin or as a post-processor, and both will work. If there are any compute-intensive operations, it would be advantageous to write a TRT plugin, since it will have access to the entire tensor batch on the GPU. If the operations are not compute-intensive, then you can use the post-processor. The post-processing function runs on the CPU and is called once per image, not on the entire batch at once.
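To illustrate that difference, here is a toy NumPy sketch (function names are made up for illustration, not the DeepStream API): a CPU post-processor callback is invoked once per image, while a GPU-side plugin sees the whole batch tensor in one call:

```python
import numpy as np

def parse_detections_one_image(cls_scores, conf_thr=0.5):
    """Toy stand-in for a custom bbox-parser callback: runs on the CPU
    and sees the output tensors of a single image only."""
    return np.flatnonzero(cls_scores > conf_thr)

def run_postprocessor(batch_scores, conf_thr=0.5):
    """Post-processor style: the parser is called once per image,
    so any heavy math runs per frame in a host-side loop."""
    return [parse_detections_one_image(img, conf_thr) for img in batch_scores]

def run_as_plugin(batch_scores, conf_thr=0.5):
    """Plugin style: the op receives the whole batch tensor at once
    (on the GPU in reality); here, one vectorized op over the batch axis."""
    return batch_scores > conf_thr
```

Both paths produce equivalent detections; the plugin route just keeps the batched, compute-heavy work on the GPU.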
