There may be problems in post-processing the OpenPose outputs because DeepStream 5.0 does not officially support OpenPose post-processing.
You may have to customize the output parser function specifically for this model.
You can refer to the customized output parser functions in the following directories:
Here is what a typical parser function (callback function) looks like:

extern "C" bool NvDsInferParseCustomOpenPose(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferObjectDetectionInfo>& objectList)
{
    // Add your customized post-processing code here
    return true;
}

Note that you may also have to implement the drawing of pose lines yourself (using OpenCV, for example).
Then add the corresponding config items in the config file:
Do not worry too much about the parameter name "parse-bbox-func-name"; it is just an entry point that points to your parser function.
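For example, the relevant entries in the gst-nvinfer config file could look like the following (the function name matches the parser above; the library path is a placeholder for wherever you build your custom parser library):

```
[property]
parse-bbox-func-name=NvDsInferParseCustomOpenPose
custom-lib-path=/path/to/your/libnvdsinfer_custom_impl.so
```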
There is an official NVIDIA OSS repository, https://github.com/NVIDIA-AI-IOT/trt_pose, that helps you run OpenPose in standalone mode, independent of DeepStream. This repository may be useful because it lets you easily analyze or debug the outputs of the OpenPose model.
Hopefully these suggestions are helpful to you.