Bodypose2d - How to get keypoints of all persons identified in a frame

We are running bodypose2d to get the human keypoints for all persons identified by the model. However, the code does not output the keypoints in JSON (or any other format) for further processing, as the bodypose3d code does. Getting the keypoints in JSON format would be far more useful than only getting a processed video as output. Is there any Python/C++ code available that runs the bodypose2d prediction and returns the keypoints of all persons identified in the frame as JSON output, similar to what is implemented in bodypose3d?

• Hardware Platform (Jetson / GPU) NVIDIA A100-SXM
• DeepStream Version 6.0
• TensorRT Version 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only) 470.57.02
• Issue Type (questions, new requirements, bugs) question
• How to reproduce the issue? apps/tao_others/deepstream-bodypose2d-app in NVIDIA-AI-IOT/deepstream_tao_apps on GitHub, file deepstream_bodypose2d_app.cpp

No, there is no ready-made sample for this.
You need to modify pgie_pad_buffer_probe of the bodypose2d code to write out the keypoints yourself; please refer to sgie_src_pad_buffer_probe of the bodypose3d code.
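As an illustration of what that modification could look like, here is a minimal C++ sketch of a JSON serializer for 2D keypoints. It assumes the per-person keypoints have already been produced by the existing post-processing inside pgie_pad_buffer_probe; the Keypoint2D struct, the frame_to_json() helper, and the JSON layout shown are hypothetical and are not part of the deepstream_tao_apps sample.

```cpp
// Hypothetical helper: serialize the 2D keypoints of all detected persons in
// one frame to a JSON string. Names and JSON layout are illustrative only and
// are not taken from the deepstream_tao_apps sample.
#include <cstdio>
#include <string>
#include <vector>

struct Keypoint2D {
    float x;     // pixel x coordinate
    float y;     // pixel y coordinate
    float conf;  // confidence score of this keypoint
};

// Build a JSON object for one frame:
// {"frame": N, "persons": [{"keypoints": [[x, y, conf], ...]}, ...]}
std::string frame_to_json(unsigned int frame_num,
                          const std::vector<std::vector<Keypoint2D>>& persons)
{
    std::string out = "{\"frame\": " + std::to_string(frame_num) + ", \"persons\": [";
    for (size_t p = 0; p < persons.size(); ++p) {
        out += "{\"keypoints\": [";
        for (size_t k = 0; k < persons[p].size(); ++k) {
            char buf[96];
            std::snprintf(buf, sizeof(buf), "[%.2f, %.2f, %.3f]",
                          persons[p][k].x, persons[p][k].y, persons[p][k].conf);
            out += buf;
            if (k + 1 < persons[p].size()) out += ", ";
        }
        out += "]}";
        if (p + 1 < persons.size()) out += ", ";
    }
    out += "]}";
    return out;
}

// Stand-alone demo with fabricated values. In the app, frame_to_json() would
// be called from pgie_pad_buffer_probe after the keypoints of each frame have
// been parsed, and the result appended to an output file (one line per frame).
int main()
{
    std::vector<std::vector<Keypoint2D>> persons = {
        { {120.0f, 80.0f, 0.91f}, {118.5f, 95.2f, 0.87f} },  // person 0, 2 joints
        { {300.4f, 60.1f, 0.78f} }                           // person 1, 1 joint
    };
    std::printf("%s\n", frame_to_json(0, persons).c_str());
    return 0;
}
```

The only integration work would then be filling the persons vector from the keypoints that the existing bodypose2d post-processing already computes per frame, analogous to how sgie_src_pad_buffer_probe in the bodypose3d app emits its results.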
