I converted the .etlt model to a TensorRT engine with tao-converter and wrote a Python script to run inference with the BodyPoseNet model. I am getting the output layers as described here: BodyPoseNet | NVIDIA NGC — two output tensors, the confidence maps and the part affinity fields. What are the postprocessing steps after this to get the keypoint coordinates?
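For reference, the first postprocessing stage is usually extracting candidate keypoints from the confidence maps via a thresholded local-maximum search, then using the part affinity fields to group the parts into per-person skeletons. Below is a minimal sketch of the peak-finding step only; the channel layout, heatmap resolution, and threshold value are assumptions and need to be adapted to bpnet's actual output shapes (and the resulting coordinates scaled back to the input-image resolution):

```python
import numpy as np

def find_peaks(heatmap, threshold=0.1):
    """Return (row, col, score) for each local maximum above threshold.

    heatmap: 2-D array, one channel of the confidence-map tensor.
    """
    # Pad with -inf so border pixels can be compared with 4 neighbours.
    p = np.pad(heatmap, 1, mode="constant", constant_values=-np.inf)
    center = p[1:-1, 1:-1]
    is_peak = (
        (center > p[:-2, 1:-1]) & (center > p[2:, 1:-1]) &
        (center > p[1:-1, :-2]) & (center > p[1:-1, 2:]) &
        (center > threshold)
    )
    rows, cols = np.nonzero(is_peak)
    return [(int(r), int(c), float(heatmap[r, c])) for r, c in zip(rows, cols)]

# Tiny synthetic example: a single blob peaking at (3, 4).
hm = np.zeros((8, 8), dtype=np.float32)
hm[3, 4] = 0.9
hm[3, 5] = hm[2, 4] = 0.4
peaks = find_peaks(hm)
print(peaks)  # one candidate keypoint at row 3, col 4
```

The PAF-based grouping step (scoring limb candidates by integrating the affinity field between peak pairs, then assembling skeletons) is not shown here.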
• Hardware - GeForce GTX 1660
• Network Type - bpnet
• CUDA Version - 11.4
• TensorRT Version - 8.0.1.6
This is the script I am using for inference: bpnetinfernce.py (5.2 KB)
Thanks @Morganh. Inside DeepStream there is a shared object file called "libnvcv_bodypose2d.so"; I think deepstream-bodypose2d-app calls its postprocessing functions from this .so file. Can we use libnvcv_bodypose2d.so directly in Python?
Sure @Morganh. I loaded the .so file with this script, but I don't know how to call the functions from it:
from ctypes import CDLL
slibc = 'libnvcv_bodypose2d.so'
hlibc = CDLL(slibc)
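Once the library is loaded, the general ctypes pattern is to declare each function's C signature (argtypes/restype) before calling it. The exported symbols of libnvcv_bodypose2d.so are not documented for Python use (and if they are C++-mangled, ctypes cannot call them directly), so the sketch below only demonstrates the mechanics using the standard C math library; the function names there are known to exist, unlike any guess about the bodypose library:

```python
import ctypes
import ctypes.util

# Load a shared object (libm here as a stand-in; you would substitute the
# path to libnvcv_bodypose2d.so after inspecting its exported symbols,
# e.g. with `nm -D libnvcv_bodypose2d.so`).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature before calling: double cos(double).
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

Without the argtypes/restype declarations, ctypes defaults to int arguments and an int return value, which silently corrupts floating-point calls.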
OK, please debug your Python code further. Officially, we are planning to add bpnet inference to the Triton app, which will expose the postprocessing in Python.