DeepStream: Error faced in Back-To-Back Detector

• Hardware Platform (Jetson / GPU) dGPU aws T4
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 7
• NVIDIA GPU Driver Version (valid for GPU only) 440.82

I have converted a model from Keras -> ONNX -> TensorRT.
I am using this model as a secondary detector.

My primary detector is YOLO.

When I run this as a back-to-back detector, I get this error:

root@6a4f50ec8943:/opt/nvidia/deepstream/deepstream-5.0/sources/b-t-b-custom# deepstream-app -c deepstream_app_config_yoloV4.txt
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:02.081130836 1382 0x556fd87eb010 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/b-t-b-custom/model0401b.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input 480x768x3
1 OUTPUT kFLOAT concatenate_1 30x48x8

0:00:02.081229129 1382 0x556fd87eb010 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/sources/b-t-b-custom/model0401b.engine
0:00:02.081254080 1382 0x556fd87eb010 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:783> [UID = 2]: RGB/BGR input format specified but network input channels is not 3
ERROR: nvdsinfer_context_impl.cpp:1033 Infer Context prepare preprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED
0:00:02.084288551 1382 0x556fd87eb010 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start:<secondary_gie_0> error: Failed to create NvDsInferContext instance
0:00:02.084309294 1382 0x556fd87eb010 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start:<secondary_gie_0> error: Config file path: /opt/nvidia/deepstream/deepstream-5.0/sources/b-t-b-custom/config_infer_secondary_detector.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: main:651: Failed to set pipeline to PAUSED
Quitting
ERROR from secondary_gie_0: Failed to create NvDsInferContext instance
Debug info: gstnvinfer.cpp(781): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:secondary_gie_bin/GstNvInfer:secondary_gie_0:
Config file path: /opt/nvidia/deepstream/deepstream-5.0/sources/b-t-b-custom/config_infer_secondary_detector.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed

How should I solve this error?
Any tips/suggestions/feedback?

Can you upload your config files?

Yes, please find them attached here:
deepstream_app_config_yoloV4.txt (4.2 KB) config_infer_secondary_detector.txt (2.3 KB) config_infer_primary_yoloV4.txt (3.1 KB)

Keras compatibility is not validated with DeepStream; DeepStream may not support it.

I suspect that by default nvinfer assumes the input is in CHW [channels, height, width] order,
but my model's input is in HWC [height, width, channels] order.
Hence the error:
ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:783> [UID = 2]: RGB/BGR input format specified but network input channels is not 3

Is this the reason?
How do I make the changes? I want nvinfer to handle the HWC format.
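If my guess is right, the numbers line up: reading the engine's reported input shape 480x768x3 with a CHW assumption treats 480 as the channel count, which would explain the "network input channels is not 3" message. A quick sketch of that reading (shape taken from the log above):

```python
# Input shape as reported in the engine log; it is actually HWC.
shape = (480, 768, 3)

channels_if_chw = shape[0]  # a CHW reading sees 480 "channels"
channels_if_hwc = shape[2]  # the real channel axis holds 3

print(channels_if_chw, channels_if_hwc)  # 480 3
```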

@pinktree3

I am not very familiar with Keras, but I think you can add a transpose (CHW -> HWC) operation at the input of the Keras model, so that the network accepts the CHW tensor nvinfer provides, and then convert the Keras model including this transpose operation into ONNX.
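The axis permutation that transpose performs can be sketched in NumPy (in Keras itself this would be something like `tf.keras.layers.Permute((2, 3, 1))`, which permutes the non-batch axes and is 1-indexed); shapes are taken from the engine log above:

```python
import numpy as np

# What nvinfer feeds the engine by default: a CHW tensor.
chw = np.zeros((3, 480, 768), dtype=np.float32)

# The transpose to prepend inside the model: CHW -> HWC.
# In Keras this corresponds to a Permute over the non-batch axes.
hwc = np.transpose(chw, (1, 2, 0))

print(hwc.shape)  # (480, 768, 3) -- the layout the rest of the network expects
```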

Okay, I will try it. Thanks!