NvInfer: How to change input to HWC for ONNX models?

I converted my custom model to ONNX and am now trying to use it with the NvInfer plugin in DeepStream.

I’m using the deepstream-test-1 sample application for testing.

I’ve attached the config file for the nvinfer plugin.

dstest1_pgie_config.txt (3.6 KB)
• Hardware Platform : Jetson Nano
• DeepStream Version : 5.0
• JetPack Version : 4.4DP
• TensorRT Version : 7.0
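
For context, the failing check in the log below is driven by the color-format setting in the attached file. The ONNX-related properties look roughly like this (an illustrative sketch, not the exact attachment):

    [property]
    onnx-file=model_custom.onnx
    model-engine-file=model_custom.onnx_b1_gpu0_fp16.engine
    # 0=RGB, 1=BGR; this is what triggers the channels-must-be-3 check
    model-color-format=0
    # 2 = FP16, matching the _fp16 engine name in the log
    network-mode=2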

    0:00:04.476397255 21074   0x559618b2a0 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 1]: deserialized trt engine from :/home/Username/Downloads/model_custom.onnx_b1_gpu0_fp16.engine
    INFO: [Implicit Engine Info]: layers num: 8
    0   INPUT  kFLOAT input_1:0       218x1025x3
    1   OUTPUT kFLOAT dense_7/Softmax:0 33
    2   OUTPUT kFLOAT dense_6/Softmax:0 33
    3   OUTPUT kFLOAT dense_5/Softmax:0 33
    4   OUTPUT kFLOAT dense_4/Softmax:0 33
    5   OUTPUT kFLOAT dense_3/Softmax:0 33
    6   OUTPUT kFLOAT dense_2/Softmax:0 33
    7   OUTPUT kFLOAT dense_1/Softmax:0 33

    0:00:04.476670490 21074   0x559618b2a0 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 1]: Use deserialized engine model: /home/Username/Downloads/model_custom.onnx_b1_gpu0_fp16.engine
    0:00:04.476719449 21074   0x559618b2a0 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:783> [UID = 1]: RGB/BGR input format specified but network input channels is not 3
    ERROR: Infer Context prepare preprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED
    0:00:04.478723917 21074   0x559618b2a0 WARN                 nvinfer gstnvinfer.cpp:781:gst_nvinfer_start:<primary-nvinference-engine> error: Failed to create NvDsInferContext instance
    0:00:04.478784803 21074   0x559618b2a0 WARN                 nvinfer gstnvinfer.cpp:781:gst_nvinfer_start:<primary-nvinference-engine> error: Config file path: dstest1_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
    Running...
    ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
    Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(781): gst_nvinfer_start (): /GstPipeline:dstest1-pipeline/GstNvInfer:primary-nvinference-engine:
    Config file path: dstest1_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
    Returned, stopping playback
    Deleting pipeline

So, by default nvinfer assumes the input is in CHW order, but this model’s input is in HWC format. nvinfer reads the first dimension as the channel count, so 218x1025x3 is interpreted as C=218, H=1025, W=3, and the RGB/BGR check (which requires exactly 3 channels) fails with the error above.

Is there any way to specify the input order for an ONNX model, like we have with uff-input-dims for UFF?
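
For reference, this is the UFF mechanism I mean. If I recall the format correctly, the last field of uff-input-dims is the input order (0 = NCHW, 1 = NHWC), so a UFF config can declare an HWC input like this (dimensions illustrative):

    [property]
    # channel;height;width;input-order (0=NCHW, 1=NHWC)
    uff-input-dims=3;218;1025;1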

Kind of a solution: while converting the model to ONNX you can use --inputs-as-nchw to solve this problem.
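
For example, if the model comes from TensorFlow and is converted with tf2onnx, the flag takes the names of the inputs to transpose. A sketch, assuming a SavedModel directory (the input name input_1:0 is taken from the engine log above):

    python -m tf2onnx.convert \
        --saved-model ./saved_model \
        --output model_custom.onnx \
        --inputs-as-nchw input_1:0

This makes tf2onnx insert a transpose so the exported graph expects NCHW input, which matches what nvinfer assumes.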


What is N in this case?

The batch size. NCHW stands for batch (N), channels (C), height (H), width (W).
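
To double-check the conversion before running the pipeline again, you can inspect the input shape of the new ONNX file. A minimal sketch using the onnx Python package (file name taken from the log above):

    import onnx

    model = onnx.load("model_custom.onnx")
    for inp in model.graph.input:
        # dim_value is 0 for symbolic dims (e.g. a dynamic batch axis)
        dims = [d.dim_value for d in inp.type.tensor_type.shape.dim]
        print(inp.name, dims)

After --inputs-as-nchw the input should report N x 3 x 218 x 1025 instead of 218x1025x3.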