Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) → dGPU (AWS T4)
• DeepStream Version → 5.0
• TensorRT Version → 7
• NVIDIA GPU Driver Version (valid for GPU only) → 440.82
I am trying to run inference with RetinaFace, but I get the error below. I have a custom layer that decodes the RetinaFace output, and it is built into the engine file. The engine deserializes successfully and the pipeline runs on an H.264 stream, but I cannot get any detection output:
Now playing: ../../../samples/streams/sample_720p.h264
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:02.700166702 120 0x55f409fa04d0 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/deepstream-retinaface-multistream/tensorrt_engines_awsT4/retina_r50.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT data 3x640x1088
1 OUTPUT kFLOAT prob 428401x1x1
0:00:02.700248097 120 0x55f409fa04d0 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/deepstream-retinaface-multistream/tensorrt_engines_awsT4/retina_r50.engine
0:00:02.702319985 120 0x55f409fa04d0 INFO nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:retinaface_pgie_config.txt sucessfully
Running...
0:00:02.851189263 120 0x55f409f9ac50 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:02.851215394 120 0x55f409f9ac50 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:573> [UID = 1]: Failed to parse bboxes
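From the error, my understanding is that nvinfer is falling back to its default DetectNet-style parser (which looks for "coverage"/"bbox" layers), so the single "prob" output from my decode plugin would need a custom bounding-box parser registered via parse-bbox-func-name and custom-lib-path in the [property] section of retinaface_pgie_config.txt. Below is a minimal sketch of the hook I am trying; the record layout of the prob buffer (count followed by fixed-size 15-float records) and the threshold are assumptions and may not match the actual decode plugin:

```cpp
#include <vector>
#include "nvdsinfer_custom_impl.h"

/* Sketch of a custom parser for the single "prob" output layer.
 * ASSUMED layout: prob[0] = number of detections, followed by 15-float
 * records [x1, y1, x2, y2, conf, 10 landmark values] in network-input
 * coordinates. The real decode plugin may pack this differently. */
extern "C" bool NvDsInferParseCustomRetinaFace(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    if (outputLayersInfo.empty())
        return false;

    const float *prob = static_cast<const float *>(outputLayersInfo[0].buffer);
    const int numDet = static_cast<int>(prob[0]);
    const int recordSize = 15;        /* assumption: 15 floats per detection */
    const float confThreshold = 0.5f; /* assumption: fixed confidence cutoff */

    for (int i = 0; i < numDet; i++) {
        const float *det = prob + 1 + i * recordSize;
        if (det[4] < confThreshold)
            continue;

        NvDsInferObjectDetectionInfo obj{};
        obj.classId = 0; /* single class: face */
        obj.left   = det[0];
        obj.top    = det[1];
        obj.width  = det[2] - det[0];
        obj.height = det[3] - det[1];
        obj.detectionConfidence = det[4];
        objectList.push_back(obj);
    }
    return true;
}

/* Verify the signature matches what gst-nvinfer expects. */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomRetinaFace);
```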
I have uploaded the app here on Drive, along with a README, in case you want to try to reproduce the error.