Could not find output coverage layer for parsing objects

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Xavier
• DeepStream Version 6.1.1
• JetPack Version (valid for Jetson only) 5.0.2
• TensorRT Version Not sure. I’m using the deepstream-l4t:6.1.1-triton image
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.) Run the deepstream-rtsp-in-rtsp-out.py example app using Triton with a YOLOv7 ONNX -> TensorRT model
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or sample application, and the function description)

I’m trying to run the deepstream-rtsp-in-rtsp-out.py example app inside the deepstream-l4t:6.1.1-triton container, using the Triton server with a YOLOv7 model converted from ONNX to TensorRT. With tensor_order set to TENSOR_ORDER_LINEAR, I get the errors below. Is it possible to run custom code within the Python app for the custom_parse_bbox_func? Are there any examples or docs anywhere on how to do this?

ERROR: infer_postprocess.cpp:599 Could not find output coverage layer for parsing objects
ERROR: infer_postprocess.cpp:1054 Failed to parse bboxes
ERROR: infer_postprocess.cpp:383 detection parsing output tensor data failed, uid:5, nvinfer error:NVDSINFER_OUTPUT_PARSING_FAILED
ERROR: infer_postprocess.cpp:270 Infer context initialize inference info failed, nvinfer error:NVDSINFER_OUTPUT_PARSING_FAILED
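For context, the relevant part of my nvinferserver config looks roughly like this (model name, class count and label file are placeholders rather than my exact file):

infer_config {
  backend {
    triton {
      model_name: "yolov7"
      version: -1
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
  }
  postprocess {
    labelfile_path: "labels.txt"
    detection {
      num_detected_classes: 80
      # no custom_parse_bbox_func set here, so the default parser is used
    }
  }
}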

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Please refer to yoloV7 onnx triton inference - #3 by fanzh.
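For anyone landing here later: the "Could not find output coverage layer" error comes from the default detector parser, which expects DetectNet-style coverage/bbox outputs, so a YOLOv7 model needs a custom parser. With nvinferserver (Triton), the custom_parse_bbox_func named in the postprocess section is a C/C++ function compiled into a shared library and loaded via custom_lib; it is not implemented inside the Python app itself. A minimal sketch of that wiring, with a hypothetical function name (NvDsInferParseCustomYoloV7) and placeholder library path:

infer_config {
  # ... other settings unchanged ...
  postprocess {
    labelfile_path: "labels.txt"
    detection {
      num_detected_classes: 80
      custom_parse_bbox_func: "NvDsInferParseCustomYoloV7"
    }
  }
  custom_lib {
    path: "/path/to/libnvds_infercustomparser_yolov7.so"
  }
}

and a skeleton of the parser itself (the actual decode logic depends on how the YOLOv7 ONNX model was exported):

// Custom bbox parser skeleton, compiled into the shared library referenced
// above, e.g.:
//   g++ -shared -fPIC -o libnvds_infercustomparser_yolov7.so yolov7_parser.cpp \
//       -I/opt/nvidia/deepstream/deepstream/sources/includes
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomYoloV7(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferParseObjectInfo> &objectList)
{
    // Decode boxes, scores and class ids from the YOLOv7 output tensor(s)
    // in outputLayersInfo and append NvDsInferParseObjectInfo entries to
    // objectList. Return true on success.
    return true;
}

// Compile-time check that the signature matches what DeepStream expects.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomYoloV7);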

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.