Deploy openpose on DS5.0

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) 1050Ti
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.0
• NVIDIA GPU Driver Version (valid for GPU only) 440.100

Hi, we are trying to deploy OpenPose on DS5.0 and we followed https://github.com/cding-nv/deepstream-openpose. However, an error occurred when we ran make (libs/nvdsinfer):
nvdsinfer_context_impl.cpp:1808:46: error: ‘NvDsInferUffInputOrder_kNHWC’ was not declared in this scope

We noticed that this program is based on DS4.0, and we are not sure whether there is a version conflict between DS4.0 and DS5.0.

@yohoohhh

How did you copy and compile the source code of https://github.com/cding-nv/deepstream-openpose into DS5.0’s working directories?

I see there is a duplicate directory, libs/nvdsinfer, that contains the source code of the old nvdsinfer plugin from DS4.0.

libs/nvdsinfer already exists in DS5.0

We do not know how to set it up correctly, so we just copied the source code to “/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-openpose/” and followed the README to compile.

Try ignoring libs/nvdsinfer from the GitHub repo.

You mean we ignore this step:


We just ignored that step and changed VERSION from 4.0 to 5.0 in “/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-openpose/openpose_app/Makefile”. After that it compiled, and we ran the command:
./openpose-app ./nvinfer_config.txt COCO_val2014_000000000564.jpg
However, it displays a black screen.

@yohoohhh

There may be problems in the post-processing of the OpenPose outputs, because DS5.0 does not officially support OpenPose post-processing.

You may have to customize the output parser function specifically for this model.
You can refer to the customized output parser functions in the following directories:
<deepstream_dir>/sources/objectDetector_FasterRCNN
<deepstream_dir>/sources/objectDetector_SSD
<deepstream_dir>/sources/objectDetector_Yolo

Here is what a typical parser function (callback function) looks like:

extern "C" bool NvDsInferParseCustomOpenPose(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList)
{
    // TODO: 
    // Add your customized post processing code here
    // TODO:
    // May be you also have to implement drawings of pose lines by yourself (using OpenCV?)
}
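As a very rough, untested sketch of what the body could start with: the raw output tensors come in through outputLayersInfo, and you typically look them up by name before running your own keypoint extraction. The layer names "heatmaps" and "pafs" below are only placeholders, so please check the actual output layer names of your OpenPose model (strcmp needs <cstring>):

// Inside NvDsInferParseCustomOpenPose() -- locate the raw output tensors first.
const NvDsInferLayerInfo* heatmaps = nullptr;
const NvDsInferLayerInfo* pafs = nullptr;
for (const auto& layer : outputLayersInfo) {
    if (!strcmp(layer.layerName, "heatmaps"))      // placeholder layer name
        heatmaps = &layer;
    else if (!strcmp(layer.layerName, "pafs"))     // placeholder layer name
        pafs = &layer;
}
if (!heatmaps || !pafs)
    return false;

// Raw float data of the heatmap tensor, e.g. laid out as [numParts x H x W].
const float* heatmapData = static_cast<const float*>(heatmaps->buffer);
unsigned int numElements = heatmaps->inferDims.numElements;

// TODO: run peak finding on heatmapData, group the keypoints using the PAFs,
// then either fill objectList or draw the pose lines yourself.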

And add the corresponding config items in the config file:

parse-bbox-func-name=NvDsInferParseCustomOpenPose
custom-lib-path=your_openpose_custom_parser_dir/your_openpose_custom_parser.so

Do not worry too much about the parameter name “parse-bbox-func-name”; it is just an entry point that points to your parser function.
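The parser then has to be built into that shared library. A minimal sketch of the build command, assuming a default DS5.0 and CUDA installation (the source and library file names are only examples):

g++ -fPIC -shared -o your_openpose_custom_parser.so your_openpose_custom_parser.cpp \
    -I /opt/nvidia/deepstream/deepstream-5.0/sources/includes \
    -I /usr/local/cuda/include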

There is an official NVIDIA OSS repository, https://github.com/NVIDIA-AI-IOT/trt_pose, that helps you run OpenPose in standalone mode, independent of DeepStream. This repository may be useful for you because you can easily analyse or debug the outputs of the pose model.

Hopefully these suggestions are helpful for you.

@yohoohhh

An alternative solution is to study the code inside libs/nvdsinfer from https://github.com/cding-nv/deepstream-openpose. There should be some customization in libs/nvdsinfer that differs from the standard nvdsinfer implementation. You can try to merge that customization into the nvdsinfer of DeepStream 5.0.
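For example, assuming you cloned the repository next to your DeepStream installation, you can diff the two source trees to see exactly what was changed (adjust the paths to your setup):

diff -ru deepstream-openpose/libs/nvdsinfer /opt/nvidia/deepstream/deepstream-5.0/sources/libs/nvdsinfer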