DeepStream fails to parse ONNX model with opset 11

ONNX IR version: 0.0.6
Opset version: 11
Producer name: tf2onnx
Producer version: 1.6.0
Domain:
Model version: 0
Doc string:

When the opset version is 9 it is OK and the engine file can be created, but when the opset version is 11 it reports an error. The DeepStream version is 4.0.1. Do you know how to support opset version 11? Thanks.
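
For reference, the IR version and opset of an exported file can be confirmed with the onnx Python package; a minimal sketch, where "model.onnx" is a placeholder for the actual exported file:

# Minimal sketch: print the IR version and opset(s) of an exported ONNX file.
# "model.onnx" is a placeholder for the actual exported file.
import onnx

model = onnx.load("model.onnx")
print("IR version:", model.ir_version)
for opset in model.opset_import:
    print("Domain:", opset.domain or "ai.onnx", "Opset:", opset.version)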

Hi,

TensorRT 6.0, which DeepStream is built with, only supports up to opset-10.
https://github.com/onnx/onnx-tensorrt/blob/6.0/operators.md

May I know what kind of error you are seeing?
You will need to wait for DeepStream to support TensorRT 7.0 to get opset-11 support.
https://github.com/onnx/onnx-tensorrt/blob/7.0/operators.md
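
In the meantime, re-exporting the model at a supported opset is a possible workaround. Below is a minimal sketch that calls the tf2onnx command-line converter from Python; the SavedModel directory and output file name are placeholders, and opset 9 is used because you reported that it builds an engine on DeepStream 4.0.1.

# Minimal sketch: re-export a TensorFlow SavedModel at a lower opset via the
# tf2onnx command-line converter. "./saved_model" and "model_opset9.onnx" are
# placeholders; opset 9 was reported to build an engine with DeepStream 4.0.1.
import subprocess
import sys

subprocess.run(
    [
        sys.executable, "-m", "tf2onnx.convert",
        "--saved-model", "./saved_model",
        "--output", "model_opset9.onnx",
        "--opset", "9",
    ],
    check=True,
)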

Thanks.

While parsing node number 1 [Conv]:
ERROR: ModelImporter.cpp:288 In function importModel:
[5] Assertion failed: tensors.count(input_name)
0:00:02.646132709 13140 0x222bfad0 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): Failed to parse onnx file
0:00:02.651235056 13140 0x222bfad0 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files
0:00:02.651893194 13140 0x222bfad0 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Failed to create NvDsInferContext instance
0:00:02.651950505 13140 0x222bfad0 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Config file path: /home/edgeai/deepstream_sdk_v4.0.1_jetson/sources/objectDetector_Yolo/config_infer_custom_yolo.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
0:00:02.652615010 13140 0x222bfad0 WARN GST_PADS gstpad.c:1149:gst_pad_set_active:<primary_gie_classifier:sink> Failed to activate pad

Hi, I have the same issue. Any idea when DeepStream will support TensorRT 7.0? For info, my logs are below. Could you please confirm that this is due to the opset and nothing else? In any case, I converted the model using torch.onnx.export.

*** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

(deepstream-app:3871): GLib-GObject-WARNING **: 21:21:40.605: g_object_set_is_valid_property: object class 'nvv4l2h264enc' has no property named 'bufapi-version'
Creating LL OSD context new
0:00:00.609181645  3871 0x55a6d161caf0 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:useEngineFile(): Failed to read from model engine file
0:00:00.609223278  3871 0x55a6d161caf0 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
----------------------------------------------------------------
Input filename:   /home/chandrachud_basavaraj/frcnn-onnx/faster_rcnn_r101_fpn.onnx
ONNX IR version:  0.0.6
Opset version:    11
Producer name:    pytorch
Producer version: 1.4
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
WARNING: ONNX model has a newer ir_version (0.0.6) than this parser was built against (0.0.3).
While parsing node number 1 [Sub]:
ERROR: builtin_op_importers.cpp:276 In function combineTensorsElementwise:
[8] Assertion failed: weights.shape.d[BATCH_DIM] == 1
0:00:01.200522463  3871 0x55a6d161caf0 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:generateTRTModel(): Failed to parse onnx file
0:00:01.203812934  3871 0x55a6d161caf0 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files
0:00:01.203877375  3871 0x55a6d161caf0 WARN                 nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Failed to create NvDsInferContext instance
0:00:01.203889466  3871 0x55a6d161caf0 WARN                 nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary_gie_classifier> error: Config file path: /home/chandrachud_basavaraj/frcnn-onnx/config_infer_frcnn.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
** ERROR: <main:651>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie_classifier: Failed to create NvDsInferContext instance
Debug info: gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier:
Config file path: /home/chandrachud_basavaraj/frcnn-onnx/config_infer_frcnn.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
App run failed

Hi,

Sorry, we cannot disclose our future plans here.
We will let you know when DeepStream supports TensorRT 7.0.
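
In the meantime, it may be worth re-exporting at opset 10 or lower to check whether the Sub assertion is opset-related. A minimal sketch with torch.onnx.export, where the model and dummy input are stand-ins rather than your actual detector:

# Minimal sketch: export a PyTorch model at opset 10 so the TensorRT 6.0 ONNX
# parser can handle it. The model and input shape are placeholders, not the
# actual Faster R-CNN detector.
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=False).eval()  # stand-in model
dummy_input = torch.randn(1, 3, 224, 224)                     # stand-in input

torch.onnx.export(
    model,
    dummy_input,
    "model_opset10.onnx",
    opset_version=10,           # opset supported by the TensorRT 6.0 ONNX parser
    input_names=["input"],
    output_names=["output"],
)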

Thanks.