How to run human_pose_estimation with deepstream-app?

• Hardware Platform (Jetson / GPU) NX
• DeepStream Version 5.0.1
• JetPack Version (valid for Jetson only) 4.4
• TensorRT Version 7.1.3.1

Hello, I want to use the deepstream-app method to run human pose estimation.

GitHub - NVIDIA-AI-IOT/deepstream_pose_estimation: This is a sample DeepStream application to demonstrate a human pose estimation pipeline.

This is my configuration file.
ds_config.txt (3.6 KB)
deepstream_pose_estimation_config.txt (2.0 KB)

How should I add the parsing function to the deepstream_pose_estimation_config.txt file, and what else needs to be modified to make it work?

Hi,

You can find the parser in deepstream_pose_estimation_app.cpp directly.

Thanks.

I want to run it with deepstream-app -c ds_config.txt.
Do I need to modify the original parsing function and build it into a .so file? Do other modules, such as the OSD and metadata handling, also need to be modified? Can you give me some detailed suggestions?

Thanks very much!

Hi,

The parser is not included in deepstream-app by default.
You can wrap it into a library like libnvdsinfer_custom_impl_ssd.so in the objectDetector_SSD example, as sketched below.
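A minimal sketch of such a library, assuming a hypothetical parser function name NvDsInferParseCustomPose. The actual heatmap / part-affinity-field post-processing from deepstream_pose_estimation_app.cpp still has to be ported into it; the body below is only a placeholder:

```cpp
/* Sketch of a custom nvinfer output parser library.
 * Assumption: the real pose post-processing from
 * deepstream_pose_estimation_app.cpp is ported into this function. */

#include <vector>
#include "nvdsinfer_custom_impl.h"

/* Hypothetical name; it must match parse-bbox-func-name in the nvinfer config. */
extern "C" bool NvDsInferParseCustomPose(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    /* outputLayersInfo holds the raw tensors produced by the pose model
     * (heatmaps and part affinity fields). Parse them here and convert each
     * detected person into an NvDsInferObjectDetectionInfo entry so that
     * deepstream-app can draw and track it. */
    for (auto const &layer : outputLayersInfo) {
        (void)layer; /* placeholder: real parsing goes here */
    }

    (void)networkInfo;
    (void)detectionParams;
    (void)objectList;
    return true;
}

/* Verify the function matches the prototype nvinfer expects. */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomPose);
```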

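Once the library is built (the Makefile under the objectDetector_SSD sample's nvdsinfer_custom_impl_ssd directory can be reused as a template), it is referenced from the [property] group of the nvinfer config. parse-bbox-func-name and custom-lib-path are the standard nvinfer properties; the library path and function name below are placeholders that must match your build:

```
[property]
# placeholders: must match the compiled library and exported function
parse-bbox-func-name=NvDsInferParseCustomPose
custom-lib-path=./libnvdsinfer_custom_impl_pose.so
```

If your deepstream_pose_estimation_config.txt currently exports raw tensor output instead of detections, it would also need to be switched to detector-style output for deepstream-app to consume the parsed results.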
Thanks.

Thanks for your reply. I’ll try it.