RGB/BGR input format specified but network input channels is not 3

My application has a primary GIE and one secondary GIE.

The secondary GIE's input format is N,H,W,C. It has to be that way because of some of the layers. I can run this N,H,W,C model in TensorRT successfully.

In DeepStream, my config file for the secondary GIE is as follows.

[property]
gpu-id=0
net-scale-factor=0.00392157
onnx-file=../../../../samples/models/platerect/numplate_recg_nhwc_removed_sparsetodense.onnx
model-engine-file=../../../../samples/models/platerect/numplate_recg_nhwc_removed_sparsetodense.onnx_b1_gpu0_fp16.engine
#mean-file=../../../../samples/models/Secondary_CarColor/mean.ppm
labelfile-path=../../../../samples/models/platerect/labels.txt
#int8-calib-file=../../../../samples/models/Secondary_CarColor/cal_trt.bin
infer-dims=24;94;3
force-implicit-batch-dim=0
#batch-size=10
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
input-object-min-width=94
input-object-min-height=24
input-object-max-width=94
input-object-max-height=24
process-mode=2
model-color-format=0
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=2
num-detected-classes=48
custom-lib-path=/usr/src/tensorrt/CTCGreedyDecoder_Plugin/build/libCTCGreedyDecoder.so
output-blob-names=d_predictions:0

[class-attrs-2]
threshold=0

I can create the engine successfully, but then I get this error: [UID = 2]: RGB/BGR input format specified but network input channels is not 3.

The whole message is as follows.

xavier@xavier-desktop:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-LicensePlate$ ./deepstream-test2-app file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/IMG_5715.MOV
With tracker
Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Now playing: file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/IMG_5715.MOV

Using winsys: x11 
Opening in BLOCKING MODE 
0:00:03.056920216 22769   0x558e742230 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/models/platerect/numplate_recg_nhwc_removed_sparsetodense.onnx_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input:0         24x94x3         
1   OUTPUT kFLOAT d_predictions:0 20              

0:00:03.057107521 22769   0x558e742230 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/models/platerect/numplate_recg_nhwc_removed_sparsetodense.onnx_b1_gpu0_fp16.engine
0:00:03.057144450 22769   0x558e742230 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:874> [UID = 2]: RGB/BGR input format specified but network input channels is not 3
ERROR: Infer Context prepare preprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED
0:00:03.059904609 22769   0x558e742230 WARN                 nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<secondary1-nvinference-engine> error: Failed to create NvDsInferContext instance
0:00:03.059996229 22769   0x558e742230 WARN                 nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<secondary1-nvinference-engine> error: Config file path: dstest2_sgie1_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running...
ERROR from element secondary1-nvinference-engine: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(809): gst_nvinfer_start (): /GstPipeline:dstest2-pipeline/GstNvInfer:secondary1-nvinference-engine:
Config file path: dstest2_sgie1_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback
Deleting pipeline

What is wrong in the config file?

My ONNX model is NHWC with a dynamic batch dimension. The one I tested successfully in TensorRT was 10,h,w,c, i.e. a fixed batch size of 10.
Here, though, I would like to test dynamic batching: since the PGIE's number of detections is not fixed, I want a dynamic batch in the secondary GIE. The network input shape is -1,24,94,3. Is it possible to make a dynamic implementation like this?
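As far as I understand (hedged: exact behavior depends on the DeepStream release), nvinfer builds an explicit-batch, full-dims engine for ONNX models, so a dynamic batch should be possible by leaving `force-implicit-batch-dim=0` and setting `batch-size` to the maximum number of objects expected per frame. A sketch of the relevant config fragment:

```
# ONNX models are built as explicit-batch (full-dims) engines;
# batch-size then acts as the maximum batch size of the dynamic engine.
force-implicit-batch-dim=0
batch-size=10
```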

I found that the error comes from this method:
NvDsInferStatus NvDsInferContextImpl::preparePreprocess(const NvDsInferContextInitParams& initParams){

}
The primary GIE has correct m_NetworkInfo.height, m_NetworkInfo.width and m_NetworkInfo.channels.
The secondary GIE has the issue, so I need to swap height, width and channels for it.
The secondary model's ONNX input shape is -1,24,94,3.
Inside the config file, it is set as infer-dims=24;94;3.
But the values are read out of order:
m_NetworkInfo.channels = 24, m_NetworkInfo.width = 3, m_NetworkInfo.height = 94.
After swapping them into the correct order, the error is gone.
But I am not sure whether some other fixes are still needed for this.

I hope this helps others who face the same issue as I did.