Error in NvDsInferContextImpl::parseBoundingBox()

First I trained a dssd-resnet50 object detection model on nvcr.io/nvidia/tlt-streamanalytics:v3.0-dp-py3,
then used tao-converter to generate the engine file. But when I run deepstream-app with this engine, it reports an error:

NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
**PERF:  FPS 0 (Avg)    FPS 1 (Avg)
**PERF:  0.00 (0.00)    0.00 (0.00)
**PERF:  0.00 (0.00)    0.00 (0.00)
**PERF:  0.00 (0.00)    0.00 (0.00)
**PERF:  0.00 (0.00)    0.00 (0.00)
**PERF:  0.00 (0.00)    0.00 (0.00)
**PERF:  0.00 (0.00)    0.00 (0.00)                                                                                 
**PERF:  0.00 (0.00)    0.00 (0.00)                                                                                     
0:00:40.034610171 29666      0x16c92a0 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]:
Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:40.034675482 29666      0x16c92a0 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]:
Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:735> [UID = 1]: Failed to parse bboxes


• Hardware Platform Jetson
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only) 5.1
• TensorRT Version 8.4
• NVIDIA GPU Driver Version (valid for GPU only) cuDNN 8.2.3.49
• Issue Type questions
• How to reproduce the issue ? First I trained a dssd-resnet50 object detection model on nvcr.io/nvidia/tlt-streamanalytics:v3.0-dp-py3,
then used tao-converter to generate the engine file. But when I run deepstream-app with this engine, it reports an error (see the log above).

dssd_config_infer_primary.txt (3.4 KB)
dssd_Engine.txt (4.6 KB)

Please use the dssd config file in deepstream_tao_apps/configs/dssd_tao at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub
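For context on why that config fixes it: the default nvinfer bbox parser expects DetectNet_v2-style coverage/bbox output layers, which a DSSD model does not have, hence the "Could not find output coverage layer" error. The repo's config points nvinfer at the TAO custom NMS parser instead. A minimal sketch of the relevant [property] keys, based on the dssd config in that repo (the paths are placeholders for your setup, not literal values):

```
[property]
# Custom bbox parsing for TAO detection models (DSSD/SSD/etc.)
# exported with an NMS output layer, instead of the default
# DetectNet_v2 coverage-layer parser.
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=/path/to/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so
# Output tensor name of the exported DSSD model.
output-blob-names=NMS
# 0 = detector
network-type=0
```

Build libnvds_infercustomparser_tao.so from the post_processor directory of the repo before running deepstream-app.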

Thank you, the problem has been solved.