• Hardware Platform (Jetson / GPU): dGPU (AWS T4)
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): N/A (dGPU)
• TensorRT Version: 7
• NVIDIA GPU Driver Version (valid for GPU only): 440.82
I have converted a model from Keras -> ONNX -> TensorRT.
I am using this model as a secondary detector; my primary detector is YOLO.
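For reference, the conversion went roughly along these lines (a sketch, not my exact script; the keras2onnx package, the file names, and the trtexec invocation below are approximations):

# Keras -> ONNX (sketch; "model0401b.h5" is a placeholder path)
import keras2onnx
import onnx
from tensorflow import keras

# Load the trained Keras model; its input is channels-last (480x768x3)
model = keras.models.load_model("model0401b.h5")

# Convert to ONNX; the channels-last input layout is kept as-is
onnx_model = keras2onnx.convert_keras(model, model.name)
onnx.save_model(onnx_model, "model0401b.onnx")

The ONNX model was then built into a TensorRT engine, approximately:

trtexec --onnx=model0401b.onnx --saveEngine=model0401b.engine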
When I run this as a back-to-back detector, I get the following error:
root@6a4f50ec8943:/opt/nvidia/deepstream/deepstream-5.0/sources/b-t-b-custom# deepstream-app -c deepstream_app_config_yoloV4.txt
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:02.081130836 1382 0x556fd87eb010 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/b-t-b-custom/model0401b.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input 480x768x3
1 OUTPUT kFLOAT concatenate_1 30x48x8
0:00:02.081229129 1382 0x556fd87eb010 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/sources/b-t-b-custom/model0401b.engine
0:00:02.081254080 1382 0x556fd87eb010 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:783> [UID = 2]: RGB/BGR input format specified but network input channels is not 3
ERROR: nvdsinfer_context_impl.cpp:1033 Infer Context prepare preprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED
0:00:02.084288551 1382 0x556fd87eb010 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start:<secondary_gie_0> error: Failed to create NvDsInferContext instance
0:00:02.084309294 1382 0x556fd87eb010 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start:<secondary_gie_0> error: Config file path: /opt/nvidia/deepstream/deepstream-5.0/sources/b-t-b-custom/config_infer_secondary_detector.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: main:651: Failed to set pipeline to PAUSED
Quitting
ERROR from secondary_gie_0: Failed to create NvDsInferContext instance
Debug info: gstnvinfer.cpp(781): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:secondary_gie_bin/GstNvInfer:secondary_gie_0:
Config file path: /opt/nvidia/deepstream/deepstream-5.0/sources/b-t-b-custom/config_infer_secondary_detector.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
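For context, the [property] group of config_infer_secondary_detector.txt looks roughly like this (a sketch with approximate values, not the exact file; the scale factor and class count are placeholders, and the custom bounding-box parser settings are omitted):

[property]
gpu-id=0
# placeholder: 1/255.0 pixel normalization
net-scale-factor=0.0039215686274
# the engine that the log shows being deserialized
model-engine-file=model0401b.engine
batch-size=1
# 2 = FP16 (placeholder)
network-mode=2
# 0 = detector
network-type=0
# 2 = secondary mode, operate on objects found by the primary GIE
process-mode=2
# matches UID 2 in the log above
gie-unique-id=2
operate-on-gie-id=1
# 0 = RGB; this is the RGB/BGR input format the preprocess error refers to
model-color-format=0
# placeholder class count
num-detected-classes=1
output-blob-names=concatenate_1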
How can I solve this error?
Any tips, suggestions, or feedback would be appreciated.