Post-processing for BodyPoseNet TensorRT model with a native Python script

Hi @Morganh

I converted the etlt model to a TensorRT engine with tao-converter and have written a Python script to run inference with the BodyPoseNet model. I am getting the output layers as mentioned here:
BodyPoseNet | NVIDIA NGC. I get two output tensors: the confidence map and the part affinity field. After this, what are the postprocessing steps to get the keypoint coordinates?
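For context, the usual CMAP/PAF postprocessing pipeline (as in the OpenPose family of models) is roughly: find local maxima (peaks) in each confidence-map channel, score candidate limb connections by integrating the part affinity fields along the line between peak pairs, then assemble the connections into per-person skeletons. Below is a minimal NumPy sketch of just the first step, peak finding; the function name and threshold are my own illustration, not from the TAO sources:

```python
import numpy as np

def find_peaks(cmap, threshold=0.1):
    """Find local maxima in each confidence-map channel.

    cmap: (H, W, C) array, one confidence channel per body part.
    Returns a list (one entry per channel) of (row, col, score) peaks.
    """
    H, W, C = cmap.shape
    # Pad with zeros so border pixels can be compared with 4 neighbours.
    padded = np.pad(cmap, ((1, 1), (1, 1), (0, 0)), mode="constant")
    center = padded[1:-1, 1:-1]          # same as cmap
    is_peak = (
        (center > threshold)
        & (center >= padded[:-2, 1:-1])  # up
        & (center >= padded[2:, 1:-1])   # down
        & (center >= padded[1:-1, :-2])  # left
        & (center >= padded[1:-1, 2:])   # right
    )
    peaks = []
    for ch in range(C):
        rows, cols = np.nonzero(is_peak[:, :, ch])
        peaks.append([(r, c, float(cmap[r, c, ch])) for r, c in zip(rows, cols)])
    return peaks

# Synthetic example: one Gaussian blob in channel 0 of a 36x48x19 map.
cmap = np.zeros((36, 48, 19), dtype=np.float32)
yy, xx = np.mgrid[0:36, 0:48]
cmap[:, :, 0] = np.exp(-((yy - 10) ** 2 + (xx - 20) ** 2) / 8.0)
peaks = find_peaks(cmap)
print(peaks[0])  # the blob's maximum, at (10, 20)
```

The peaks from this step become keypoint candidates; the PAF-scoring and assembly steps are what the `libnvcv_bodypose2d.so` / trt_pose code discussed below implements.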

• Hardware - GeForce GTX 1660
• Network Type - bpnet
• CUDA Version - 11.4
• Tensorrt Version - 8.0.1.6

This is the script I am using for inference:
bpnetinfernce.py (5.2 KB)

Officially, TAO provides “tao bpnet inference xxx” for running inference.
https://docs.nvidia.com/tao/tao-toolkit/text/bodypose_estimation/bodyposenet.html#run-inference-on-the-model

See also deepstream_tao_apps/apps/tao_others/deepstream-bodypose2d-app at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub

Thanks @Morganh. But inside DeepStream there is a shared object file called "libnvcv_bodypose2d.so", and I think the deepstream-bodypose2d-app calls its postprocessing functions from that .so file. So can we directly use this libnvcv_bodypose2d.so file in Python?

You can try to load it. You can also refer to trt_pose/parse_objects.py at master · NVIDIA-AI-IOT/trt_pose · GitHub
BTW, we are actually planning to enable bpnet in the triton-app. The postprocessing will be exposed then.

Sure @Morganh, I loaded the .so file with this script, but I don't know how to call the functions from it:
from ctypes import CDLL
slibc = 'libnvcv_bodypose2d.so'
hlibc = CDLL(slibc)

If you have any information, please help me with this.
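For reference, the general ctypes pattern after `CDLL` is to declare each function's `argtypes` and `restype` before calling it; the actual bodypose2d entry points would have to be read from the cvcore headers shipped with the DeepStream app. Since I can't assume that .so here, the same pattern is demonstrated below against the standard C math library as a stand-in:

```python
from ctypes import CDLL, c_double
import ctypes.util

# Load a shared library (libm here, as a stand-in for libnvcv_bodypose2d.so).
libm = CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Before calling a C function through ctypes, declare its signature
# so arguments and the return value are marshalled correctly.
libm.sqrt.argtypes = [c_double]
libm.sqrt.restype = c_double

print(libm.sqrt(2.0))  # 1.4142135623730951
```

One caveat: ctypes can only call C-linkage (`extern "C"`) symbols. If libnvcv_bodypose2d.so exposes a C++ API, its symbols are name-mangled and take C++ objects as arguments, so calling it from Python this way may not be practical without a C wrapper.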

For now, please leverage trt_pose/parse_objects.py at master · NVIDIA-AI-IOT/trt_pose · GitHub for the postprocessing.

Yes, I passed the cmap and paf outputs to the trt_pose postprocessing, but I am getting wrong output.

This is my input image

These are the cmap and paf input shapes:
torch.Size([1, 36, 48, 19]) , torch.Size([1, 144, 192, 38])

This is the output I am getting from the trt_pose postprocessing
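One thing worth checking, judging only from the shapes above: those tensors look channels-last (NHWC, with 19 keypoint channels and 38 PAF channels), whereas trt_pose's parser works on channels-first (NCHW) tensors, so a permute may be needed before postprocessing. A NumPy sketch of that layout fix:

```python
import numpy as np

# Engine outputs with the shapes reported above, apparently NHWC
# (batch, height, width, channels).
cmap_nhwc = np.zeros((1, 36, 48, 19), dtype=np.float32)
paf_nhwc = np.zeros((1, 144, 192, 38), dtype=np.float32)

# Move channels to axis 1 to get NCHW (batch, channels, height, width).
cmap_nchw = np.transpose(cmap_nhwc, (0, 3, 1, 2))
paf_nchw = np.transpose(paf_nhwc, (0, 3, 1, 2))

print(cmap_nchw.shape)  # (1, 19, 36, 48)
print(paf_nchw.shape)   # (1, 38, 144, 192)
```

Note also that the two tensors are at different spatial resolutions here (36x48 vs 144x192); if the trt_pose parser assumes matching grids, one of them may additionally need to be resized before parsing.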

How about running with deepstream_tao_apps/apps/tao_others/deepstream-bodypose2d-app at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub ? Is the output correct?

Yes, when I run the same image and the same model in DeepStream, it works correctly.

This is the output I am getting from the DeepStream app

OK, please debug your Python code further. Officially, we are planning to add bpnet inference to the triton app, which will expose the postprocessing in Python.

Sure @Morganh, I will debug it.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.