Unable to parse custom pytorch UNET onnx model with python deepstream-segmentation-app

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.1 Triton Container
• TensorRT Version 8.2.5-1
• NVIDIA GPU Driver Version (valid for GPU only) 510.47.03
• Issue Type( questions, new requirements, bugs) Questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

python3 deepstream_segmentation.py seg_onnx.txt …/…/…/…/streams/sample_720p.mjpeg output

  • I am using the existing deepstream-segmentation app with a custom PyTorch UNet ONNX model.
  • Here is my pgie config file:
[property]
gpu-id=0
net-scale-factor=1.0
model-color-format=0
onnx-file=/app/unet_seg/unet_converted_1280x1918.onnx
batch-size=2
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=2
interval=0
gie-unique-id=1
network-type=2
output-blob-names=outc
segmentation-threshold=0.0


[class-attrs-all]
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

I am getting the following error while nvinfer parses this model:

Starting pipeline 

0:00:03.323590595   463      0x6146750 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
ERROR: [TRT]: [shuffleNode.cpp::symbolicExecute::387] Error Code 4: Internal Error (Reshape_75: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2])
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 86 [Pad -> "onnx::Concat_218"]:
ERROR: [TRT]: ModelImporter.cpp:774: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:775: input: "x1"
input: "onnx::Pad_216"
input: "onnx::Pad_217"
output: "onnx::Concat_218"
name: "Pad_86"
op_type: "Pad"
attribute {
  name: "mode"
  s: "constant"
  type: STRING
}

ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: ModelImporter.cpp:179 In function parseGraph:
[6] Invalid Node - Pad_86
[shuffleNode.cpp::symbolicExecute::387] Error Code 4: Internal Error (Reshape_75: IShuffleLayer applied to shape tensor must have 0 or 1 reshape dimensions: dimensions were [-1,2])
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:315 Failed to parse onnx file
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:966 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:799 failed to build network.
0:00:31.772357377   463      0x6146750 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:31.796943142   463      0x6146750 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:31.796998851   463      0x6146750 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:31.797042630   463      0x6146750 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-nvinference-engine> error: Failed to create NvDsInferContext instance
0:00:31.797135561   463      0x6146750 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-nvinference-engine> error: Config file path: /app/unet_seg/seg_onnx.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-nvinference-engine:
Config file path: /app/unet_seg/seg_onnx.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

Where did you get the ONNX model? Can you provide it?

We trained on a custom dataset using this network:
GitHub - milesial/Pytorch-UNet: PyTorch implementation of the U-Net for image semantic segmentation with high quality images.
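For context, the failing Pad_86/Reshape_75 nodes appear to come from the skip-connection padding in that repository's `Up` block: the pad amounts are computed from tensor sizes at runtime, so the ONNX export emits dynamic Shape/Reshape/Pad ops that TensorRT 8.2 cannot fold. A plain-Python sketch of that padding arithmetic (mirroring what I believe the `Up.forward` in milesial/Pytorch-UNet does; the function name is mine):

```python
def skip_pad_amounts(h1, w1, h2, w2):
    """Padding [left, right, top, bottom] applied to the upsampled
    feature map (h1 x w1) so it matches the encoder skip-connection
    feature map (h2 x w2), as in the F.pad call of the UNet Up block."""
    diff_y = h2 - h1
    diff_x = w2 - w1
    return [diff_x // 2, diff_x - diff_x // 2,
            diff_y // 2, diff_y - diff_y // 2]

# An even size difference pads symmetrically:
print(skip_pad_amounts(62, 62, 64, 64))   # [1, 1, 1, 1]
# An odd difference pads asymmetrically:
print(skip_pad_amounts(61, 61, 64, 64))   # [1, 2, 1, 2]
```

Because these amounts are derived from `tensor.size()` inside `forward`, tracing records them as graph operations rather than constants. Exporting with fixed input shapes and constant folding enabled (or post-processing the model with onnx-simplifier) usually collapses these ops and is the common workaround for this class of TensorRT parse error.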

Netron Graph:

It seems some operation is not supported by TensorRT. Please refer to the TensorRT forum.

@Fiona.Chen
onnx: unet_converted_1280x1918.onnx - Google Drive

This is a PyTorch UNet ONNX model.
How does PeopleSemSegNet work, then? It is also a UNet, and its format is encrypted ONNX.
Both might be the same UNet network.

Can you use our UNet model? There is a sample for it: NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream (github.com)

I want to use the custom UNet ONNX model provided above.

So you need to go to the TensorRT forum for help.

Hello @zorin, it seems this topic cannot be moved to the TensorRT forum directly, so we need to close it. If you still have questions, please create a topic on the TensorRT forum: TensorRT - NVIDIA Developer Forums
Thank you.
