Error importing HyperPose ONNX models into DeepStream

• DeepStream Version: 5.0 GA
• NVIDIA GPU Driver Version (valid for GPU only): 450

I am trying to run the human pose estimation model provided by HyperPose (GitHub - tensorlayer/hyperpose: Library for Fast and Flexible Human Pose Estimation) in DeepStream. The models come in ONNX format (hyperpose/download-openpose-coco-model.sh at master · tensorlayer/hyperpose · GitHub), and I get the following output:

root@01bd5fd0616b:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app# deepstream-app -c source1_usb_dec_infer_resnet_int8.txt
0:00:02.822833456 55 0x7fcda8002230 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/models/hyperpose/openpose-coco-V2-HW-368x656.onnx_b1_gpu0_fp32.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT import/input_image:0 3x368x656
1 OUTPUT kFLOAT output_paf:0 38x46x82
2 OUTPUT kFLOAT output_conf:0 19x46x82

0:00:02.822944571 55 0x7fcda8002230 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/models/hyperpose/openpose-coco-V2-HW-368x656.onnx_b1_gpu0_fp32.engine
0:00:02.823829524 55 0x7fcda8002230 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary_jiro.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:181>: Pipeline ready

** INFO: <bus_callback:167>: Pipeline running

0:00:03.827816626 55 0x55a4ad5c9e30 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:03.827835089 55 0x55a4ad5c9e30 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:734> [UID = 1]: Failed to parse bboxes

I’m using the following configuration to specify the model. As you can see in the output, the engine is generated and deserialized correctly, but parsing fails afterwards. I don’t know what I need to implement or configure so that DeepStream can run without problems; I would be grateful if you could guide me.
Regards
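For reference, one common workaround when a model’s outputs (here PAF and confidence heatmaps) don’t match DeepStream’s built-in detector parser is to disable that parser and attach the raw output tensors as metadata for custom downstream processing. A minimal sketch of the relevant nvinfer properties (file names are illustrative; the rest of the config is omitted):

```ini
[property]
onnx-file=openpose-coco-V2-HW-368x656.onnx
# network-type=100 ("other") skips the built-in detector/classifier parsing,
# avoiding the "Could not find output coverage layer" error
network-type=100
# Attach raw output tensors (output_paf:0, output_conf:0) as metadata so a
# pad probe or downstream element can run custom pose post-processing
output-tensor-meta=1
```

With this, the application itself must consume the tensor metadata and implement the OpenPose-style decoding; nvinfer only runs inference.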

config_infer_primary_pose.txt (3.0 KB)

I’m afraid the current DeepStream release does not support post-processing the output of a human pose model. Did you implement your own post-processing?
