Failing to load ONNX model using nvinfer Gst element in Python

• Hardware Platform (Jetson / GPU) Jetson TX2
• DeepStream Version 5
• JetPack Version (valid for Jetson only) 4.4
• TensorRT Version 7.1.3

I am trying to use my model in a Gst pipeline by creating an nvinfer element. I read in the documentation that an nvinfer instance only supports Caffe, UFF and ONNX models for inference, and that for ONNX models we only need to pass the model file. I have a trained ssd_mobilenet_v2 frozen graph in TensorFlow, so I converted the .pb file to ONNX using tf2onnx in Colab, with the versions below.

I also tried the saved_model and checkpoint formats when generating the .onnx file, but I still face the same issue.

tf2onnx : 1.7.0/165071
onnx : 1.7
tensorflow : 1.15.2
opset : 11
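The conversion command looked roughly like the sketch below; the input/output tensor names shown are the standard TF Object Detection API defaults and are an assumption here, so adjust them to match your graph:

```shell
# Sketch of a tf2onnx conversion for a TF Object Detection frozen graph.
# Tensor names are the usual TF OD API defaults; verify them against
# your own .pb file (e.g. with summarize_graph or Netron).
python -m tf2onnx.convert \
    --graphdef frozen_inference_graph.pb \
    --inputs image_tensor:0 \
    --outputs detection_boxes:0,detection_scores:0,detection_classes:0,num_detections:0 \
    --opset 11 \
    --output vg.onnx
```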

After creating the nvinfer element, I set its properties using a .txt config file as below. I am not sure whether the format is correct.

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=ssd_model/vg.onnx
labelfile-path=ssd_model/labels.txt
force-implicit-batch-dim=1
batch-size=1
network-mode=2
num-detected-classes=12
interval=0
gie-unique-id=1
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1

After adding all the created elements to the pipeline and linking them, I get the errors below when I run it.

Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file /home/cortex0001/samples/streams/test_video.h264
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

Using winsys: x11
Opening in BLOCKING MODE
0:00:00.325905056 16779 0x1d1d82d0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:01.420661856 16779 0x1d1d82d0 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
0:00:01.420713408 16779 0x1d1d82d0 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1821> [UID = 1]: build backend context failed
0:00:01.420740864 16779 0x1d1d82d0 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1148> [UID = 1]: generate backend failed, check config file settings
0:00:01.420778496 16779 0x1d1d82d0 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:01.420797280 16779 0x1d1d82d0 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start: error: Config file path: onnx_veh_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Warning: gst-library-error-quark: Rounding muxer output height to the next multiple of 4: 272 (5): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvmultistream/gstnvstreammux.c(2307): gst_nvstreammux_change_state (): /GstPipeline:pipeline0/GstNvStreamMux:Stream-muxer
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(809): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: onnx_veh_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

I also tried converting the .onnx file to a TensorRT engine file directly, but I hit the issue below.

Loading ONNX file from path ssd_model/vg.onnx…
Beginning ONNX file parsing
Unsupported ONNX data type: UINT8 (2)
ERROR: Failed to parse the ONNX file.
In node -1 (importInput): UNSUPPORTED_NODE: Assertion failed: convertDtype(onnxDtype.elem_type(), &trtDtype)

Note: I was able to run the model using the Triton Inference Server, but it was very slow, so I switched to this approach and hence tried the above things.

If possible, can someone provide a reference for how to use an ONNX model with nvinfer in Python?
Any help will be appreciated!

This needs to be "onnx-file=ssd_model/vg.onnx"; the model-file key is only used for Caffe models, which is why nvinfer reports that no model file matched.
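That is, the [property] section of your config should reference the model like this (other keys unchanged):

```
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
onnx-file=ssd_model/vg.onnx
labelfile-path=ssd_model/labels.txt
force-implicit-batch-dim=1
batch-size=1
network-mode=2
num-detected-classes=12
interval=0
gie-unique-id=1
```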

Please refer to:

  1. doc - https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_details.html
  2. source code about the error - " ERROR: failed to build network since there is no model file matched." : /opt/nvidia/deepstream/deepstream-5.0/sources/libs/nvdsinfer/nvdsinfer_model_builder.cpp
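On the Python side, creating the element is the same as for any other model; a minimal sketch (assuming a Jetson/DeepStream environment with the GStreamer Python bindings installed, and the config file name from your post):

```python
# Minimal sketch of creating an nvinfer element in a Python Gst pipeline.
# Assumes a DeepStream environment with GStreamer Python bindings; the
# config file name is the one from the post above.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
if not pgie:
    raise RuntimeError("Unable to create nvinfer element")

# nvinfer reads model settings (onnx-file, labels, batch size, ...)
# from the config file rather than from individual element properties.
pgie.set_property("config-file-path", "onnx_veh_config.txt")
```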