Description
Hi there:
I’m new to TensorRT and I’m working on converting Google’s EfficientDet TensorFlow model (automl/efficientdet at master · google/automl · GitHub) to a TensorRT engine so we can deploy it in the DeepStream framework on a Jetson Nano.
Basically, the EfficientDet TensorFlow model is composed of 2 parts:
- Pre-processing and inference (outputs: class confidences and anchor-based bounding-box offset predictions for feature maps at 5 different scales)
- Post-processing, written in the TensorFlow 1.x API, which can be exported to a TensorFlow GraphDef. The post-processing pipeline is:
1. merge all outputs into one tensor
2. select the top-K results by confidence
3. decode class labels and bbox coordinates from the top-K results
4. run NMS for each class
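For reference, steps 2 and 4 above can be sketched in plain NumPy. This is a minimal greedy NMS, not the exact EfficientDet implementation; the box layout `[y1, x1, y2, x2]` and the thresholds are assumptions for illustration.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5, top_k=100):
    """Greedy single-class NMS on [y1, x1, y2, x2] boxes.

    A NumPy sketch of the top-K selection + suppression steps; the real
    model applies this per class after decoding the anchor offsets.
    """
    order = scores.argsort()[::-1][:top_k]   # top-K indices by confidence
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of the current best box with the remaining boxes
        yy1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        xx1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        yy2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        xx2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, yy2 - yy1) * np.maximum(0.0, xx2 - xx1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]      # suppress heavily overlapping boxes
    return keep

# two heavily overlapping boxes plus one far-away box
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=np.float32)
scores = np.array([0.9, 0.8, 0.7], dtype=np.float32)
print(nms(boxes, scores))  # keeps box 0 and box 2, suppresses box 1
```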
Here are some problems I’ve encountered:
- I’ve successfully converted the first part of the EfficientDet model to a TRT engine, using tf2onnx as a middleman.
- When I try to convert the post-processing part of EfficientDet to a TRT engine, there are some ops and data types that tf2onnx does not support.
- If I want to reimplement the post-processing part and insert it into the DeepStream pipeline, should I write it as a TensorRT plugin or as a DeepStream GIE custom parser? Is there any guide for that?
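For the first bullet, the conversion I used looks roughly like the commands below. The file paths and tensor names are placeholders (they depend on how the graph was frozen), so substitute the actual input/output node names of your model:

```shell
# Frozen GraphDef -> ONNX via tf2onnx, then ONNX -> TensorRT engine via trtexec.
# "image_arrays:0" and the two head outputs are placeholder tensor names.
python -m tf2onnx.convert \
    --graphdef efficientdet_frozen.pb \
    --output efficientdet.onnx \
    --inputs image_arrays:0 \
    --outputs class_net/class-predict:0,box_net/box-predict:0 \
    --opset 11
trtexec --onnx=efficientdet.onnx --saveEngine=efficientdet.trt --fp16
```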
Besides, given EfficientDet’s significant improvements in both speed and accuracy (39.6 mAP on COCO at 6 GFLOPs), it would be invaluable if NVIDIA could release a DeepStream pipeline sample for it like the ones for YOLOv3 or SSD. Is there any plan for that?
Environment
TensorRT Version: 6.0.1
GPU Type: Jetson Nano
Nvidia Driver Version: shipped with JetPack
CUDA Version: shipped with JetPack
CUDNN Version: shipped with JetPack
Operating System + Version: shipped with JetPack
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):