ONNX ir_version 0.0.5, TensorRT support 0.0.3?

Hi, I ran into an issue while using DeepStream for inference on 2 USB cameras.

My model is Faster R-CNN ResNet-101, trained with TensorFlow.

To use it with TensorRT, I converted the .pb to ONNX with the tensorflow-onnx GitHub project, using the command below on an x64 server.

python -m tf2onnx.convert \
--input faster_rcnn_resnet101_coco_2018_01_28/frozen_inference_graph.pb \
--output faster_rcnn_resnet101_coco_output.onnx \
--inputs image_tensor:0 \
--outputs detection_boxes:0,detection_scores:0,num_detections:0,detection_classes:0 \
--custom-ops Round,Where,Add,Upsample,ResizeBilinear,CropAndResize \
--opset 10
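
For reference, this is roughly how I inspect the exported file before copying it to the TX2. It's just a sketch using the onnx Python package; the file name is the output of the command above.

import onnx

# Load the exported model and run the basic structural checker
model = onnx.load("faster_rcnn_resnet101_coco_output.onnx")
onnx.checker.check_model(model)

# Print the IR version, opsets, and input data types that TensorRT will see
print("ir_version:", model.ir_version)
print("opsets:", [op.version for op in model.opset_import])
for inp in model.graph.input:
    # elem_type 1 == FLOAT, 2 == UINT8
    print(inp.name, inp.type.tensor_type.elem_type)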

Then I tried to use the ONNX file on a Jetson TX2 with JetPack 4.2.1 and the config below.

[property]
gpu-id=0
net-scale-factor=1
#0=RGB, 1=BGR
model-color-format=0
onnx-file=faster_rcnn_resnet101.onnx
model-engine-file=model_b1_fp32.engine
labelfile-path=labels.txt
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=80
gie-unique-id=1
is-classifier=0

TensorRT reports an error when loading the ONNX file.

----------------------------------------------------------------
Input filename:   /home/nvidia/faster_rcnn_resnet101.onnx
ONNX IR version:  0.0.5
Opset version:    10
Producer name:    tf2onnx
Producer version: 1.6.0
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
WARNING: ONNX model has a newer ir_version (0.0.5) than this parser was built against (0.0.3).
Unsupported ONNX data type: UINT8 (2)
ERROR: ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)
0:00:03.435524036 23663   0x7f300022d0 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): Failed to parse onnx file
0:00:03.486172901 23663   0x7f300022d0 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files
0:00:03.486328324 23663   0x7f300022d0 WARN                 nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Failed to create NvDsInferContext instance
0:00:03.486371619 23663   0x7f300022d0 WARN                 nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Config file path: /home/nvidia/config_infer_primary_yoloV3_tiny.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
** ERROR: <main:651>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie_classifier: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier:
Config file path: /home/nvidia/config_infer_primary_yoloV3_tiny.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
App run failed

It looks like TensorRT doesn't support ONNX IR version 0.0.5 and opset 10.

How can I run a TensorFlow-trained Faster R-CNN ResNet-101 model for video inference using DeepStream?
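
For what it's worth, the importInput assertion points at the uint8 image_tensor input. One thing I may try is forcing the graph input to float32 before parsing; a minimal, untested sketch with the onnx Python API (file names are placeholders, and the rest of the graph may still fail to parse):

import onnx
from onnx import TensorProto

model = onnx.load("faster_rcnn_resnet101.onnx")

# Force any uint8 graph input (image_tensor) to float32 so the TensorRT
# ONNX parser does not reject it at importInput
for inp in model.graph.input:
    if inp.type.tensor_type.elem_type == TensorProto.UINT8:
        inp.type.tensor_type.elem_type = TensorProto.FLOAT

onnx.save(model, "faster_rcnn_resnet101_float_input.onnx")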

Thanks.

Hi,

I see the same issue with a custom model trained using TensorFlow 2.0.
The tf2onnx converter produces IR version 0.0.5, while TensorRT expects IR version 0.0.3. I used --opset 9.
I have seen that TensorRT supports ONNX opset 9, so it's weird that the IR version is now another issue. Has anyone else managed to convert a TF 2.0 model and deploy it on a TX2 using ONNX?

P.S. I am able to get it working via the TF-TRT pipeline (rough sketch below).
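
For completeness, this is roughly the TF-TRT conversion I used. It's only a sketch with the TF 2.0 trt_convert API; the SavedModel paths are placeholders for my own model.

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Convert a TF 2.0 SavedModel into a TF-TRT optimized SavedModel
converter = trt.TrtGraphConverterV2(input_saved_model_dir="saved_model")  # placeholder path
converter.convert()
converter.save("saved_model_trt")  # placeholder output path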