TensorFlow-Yolov3 to ONNX to trt engine

Description

Hello, I have trained a custom Tensorflow-Yolov3 model.
How is it custom?
By custom I mean it is not the standard YOLOv3 model: it takes two inputs (a visual image and an infrared image), performs feature extraction and feature fusion, and finally does person detection. My project is based on this YOLOv3 TensorFlow implementation.

Now I can train, test, and use the models in my system. I have both .ckpt and .pb weights. My ultimate goal is to run these models on a Xavier NX. I have tested the model on the Xavier NX and it runs about 80% slower there. So I want to convert the models to a TRT engine and run them on the Xavier NX using TensorRT. I have tried for several days but still have not succeeded.

There are some examples, but they all use yolov3.weights and yolov3.cfg. I trained in TensorFlow and my model structure is different, so I cannot use those .cfg or .weights files.

So my questions are:

1- What steps should I follow to convert the .ckpt or .pb weights to .trt?
2- How can I use the TensorRT examples for my problem, given that they are based on YOLOv3 .weights and .cfg files?
3- Finally, how can I use the converted .trt model for inference and a real-time demo or deployment?

Environment

TensorRT Version: TensorRT-5.0.2.6
GPU Type: Xavier NX
Nvidia Driver Version: Not sure
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version:
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 1.15
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Hi @asadjavedgujar
Can you try the pb → ONNX → TRT workflow?

For any unsupported layer, create a custom plugin using ONNX GraphSurgeon.
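A minimal sketch of the pb → ONNX step with the tf2onnx CLI is below. All file and tensor names are placeholders: the actual input/output tensor names for the two-input model must be read from the frozen graph (e.g. by inspecting it in Netron). Opset 11 matches what the error log further down reports.

```shell
# Hedged sketch: convert a frozen TensorFlow graph to ONNX with tf2onnx.
# File and tensor names below are placeholders for this custom model.
python -m tf2onnx.convert \
    --input frozen_yolov3.pb \
    --inputs visual_input:0,infrared_input:0 \
    --outputs detections:0 \
    --opset 11 \
    --output model.onnx
```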

Thanks

Hi, thank you for the guidance. I am trying to convert ONNX to TRT using the onnx-tensorrt project, but I get the following error.

onnx2trt modelIn/model.onnx -o modelOut/model.trt

Input filename: modelIn/model.onnx
ONNX IR version: 0.0.6
Opset version: 11
Producer name: tf2onnx
Producer version: 1.6.3
Domain:
Model version: 0
Doc string:

Parsing model
While parsing node number 1 [Conv → “lwir_darknet/conv0/batch_normalization/FusedBatchNormV3:0”]:
ERROR: /home/littro/onnx-tensorrt/ModelImporter.cpp:537 In function importModel:
[5] Assertion failed: tensors.count(input_name)

Hi @asadjavedgujar
Could you please share your ONNX model?
Alternatively, you can try the trtexec command to generate a TRT engine from the ONNX model:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
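For example, a sketch of a trtexec invocation (file names are placeholders; the flags shown are from TensorRT 7's trtexec, and --fp16 is optional):

```shell
# Hedged sketch: build a TensorRT engine from an ONNX model with trtexec.
# File names are placeholders; flags are from TensorRT 7's trtexec.
trtexec --onnx=model.onnx \
        --saveEngine=model.trt \
        --fp16 \
        --workspace=2048
```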
Thanks!

Hi Sir , @AakankshaS
I have uploaded both models: one is the official TensorFlow-YOLOv3 .pb converted to model-coco.onnx, and the second is my custom TensorFlow-YOLOv3 .pb converted to model-asad.onnx.

You can download them from the links.

Both give the same error.
I am trying on a Xavier NX with TensorRT 7.

Hi @asadjavedgujar ,

Apologies for the delay.
Are you still facing this issue?