Creating a Human Pose Estimation Application with NVIDIA DeepStream

Hi Pouss06,

Correct, using the tracker can help with detection accuracy. You need to select the tracker configuration parameters accordingly. See the DeepStream documentation for the config parameters: Gst-nvtracker — DeepStream 5.1 Release documentation

Hi Jesperlyng,

You are right about the model output dimensions. The postprocessing code expects the dimensions in CHW order. You will need to update the postprocessing for DenseNet and ResNet.

Hi @mjhuria

I’ve changed:

deepstream_pose_estimation_app.cpp:
parse_objects_from_tensor_meta(): swapped the initialization of cmap_data/cmap_dims and paf_data/paf_dims, because the order of the heatmap/PAF layers is reversed between the pose model and DenseNet/ResNet.

post_process.cpp:
find_peaks() and refine_peaks(): all references to cmap_dims
paf_score_graph(): all references to paf_dims

I also updated the config to

net-scale-factor=1
offsets=103.939;116.779;123.68

which I got from searching for the correct preprocessing values for the DenseNet/ResNet models.

But the result still looks strange, with body-part detections scattered all over the screen, even with just one person in the frame.

I’m about to give up on this!
If you have actually made this work with the DenseNet/ResNet models, it would be kind of you to share the code needed to run them.

Thanks in advance

Hi Jesperlyng,

Could you please share the updates you made and files to reproduce this issue?

How can I change the input source from an .h264 file to a webcam?

You can refer to this code to change the input source: redaction_with_deepstream/deepstream_redaction_app.c at de907fc6aa1ea874689d85052f8d8c25d0b49960 · NVIDIA-AI-IOT/redaction_with_deepstream · GitHub

Hi,
I can use pose_estimation.onnx and get the correct result, but when I use my own model’s .onnx I get the error below:
0:00:00.813945366 24705 0x5599042de4f0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/human_pose.onnx_b1_gpu0_fp16.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x224x224
1 OUTPUT kFLOAT part_affinity_fields 56x56x42
2 OUTPUT kFLOAT heatmap 56x56x18
3 OUTPUT kFLOAT maxpool_heatmap 56x56x18

0:00:00.813998830 24705 0x5599042de4f0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/human_pose.onnx_b1_gpu0_fp16.engine
0:00:00.816529003 24705 0x5599042de4f0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:deepstream_pose_estimation_config.txt sucessfully
Running…
libnvosd (603):(ERROR) : Out of bound radius
0:00:01.830818022 24705 0x55990421da30 WARN nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop: error: Internal data stream error.
ERROR from element nv-onscreendisplay: Unable to draw circles
0:00:01.830826599 24705 0x55990421da30 WARN nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop: error: streaming stopped, reason error (-5)
Error details: gstnvdsosd.c(558): gst_nvds_osd_transform_ip (): /GstPipeline:deepstream-tensorrt-openpose-pipeline/GstNvDsOsd:nv-onscreendisplay
Returned, stopping playback
Deleting pipeline
How can I fix it? Thanks.

Hi @mjhuria , please find the files here: GitHub - JesperLyng/deepstream_pose_estimation

Changes are in this commit: Changes to accomodate HWC style models instead of CHW · JesperLyng/deepstream_pose_estimation@e57fa85 · GitHub

@178504060 Did you follow Step 3: Replace the OSD library in the DeepStream install directory?

Yes, I replaced them. Some models work well, but others don’t.

Hi @178504060 , sorry I wasn’t paying proper attention to your post.

I don’t know why you get the out-of-bound error, but it looks like you ran into the same problem as me: the models you are trying to use are ordered channels-last (HWC) instead of channels-first (CHW) like the original ONNX model.

0 INPUT kFLOAT input 3x224x224
1 OUTPUT kFLOAT part_affinity_fields **56x56x42**
2 OUTPUT kFLOAT heatmap **56x56x18**
3 OUTPUT kFLOAT maxpool_heatmap **56x56x18**

Please check my post above.
I also posted a link to a repo where I tried to fix the problem in the code, but that still didn’t work for me.

I’m not getting the .engine file from the script. This is my output.

0:00:01.897351749 16234 0x5583a848f0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files

Input filename: /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx
ONNX IR version: 0.0.4
Opset version: 7
Producer name: pytorch
Producer version: 1.3
Domain:
Model version: 0
Doc string:

WARNING: [TRT]: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Killed

Any help would be appreciated! Thanks.