Support questions for lightweight-human-pose-estimation

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** GPU
**• DeepStream Version** 5.1
**• TensorRT Version** 7.2.2.3
**• NVIDIA GPU Driver Version (valid for GPU only)** 460.39
**• Issue Type (questions, new requirements, bugs)** questions

I had finished pose estimation on an edge device, but the model size is too large: about 83 MB. Therefore, I started trying lightweight-human-pose-estimation, which is only about 13 MB.

However, I don’t understand how to change the model for pose estimation. I already asked the author a question about using lightweight-human-pose-estimation in DeepStream.

The following link is his response.
https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch/issues/158

I understand what he means, but where should I modify the output?

I changed the program to the following, but it doesn’t work: the pose estimation’s white circles always appear at the top of the window.
===================================

/* Method to parse information returned from the model */
std::tuple<Vec2D, Vec3D>
parse_objects_from_tensor_meta(NvDsInferTensorMeta *tensor_meta)
{
  void *cmap_data = tensor_meta->out_buf_ptrs_host[2];
  NvDsInferDims &cmap_dims = tensor_meta->output_layers_info[2].inferDims;
  void *paf_data = tensor_meta->out_buf_ptrs_host[3];
  NvDsInferDims &paf_dims = tensor_meta->output_layers_info[3].inferDims;

Have you checked GitHub - NVIDIA-AI-IOT/deepstream_pose_estimation (a sample DeepStream application that demonstrates a human pose estimation pipeline)?

Yes, I already checked it.
I have the link in my article.

What I mean is that I don’t understand how to change the program to use lightweight pose estimation.

The following link shows my revised method; I hope somebody can teach me how to improve it.
https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch/issues/158#issuecomment-808922373

Hey, sorry for the late reply. So, have you deployed the lightweight pose estimation model on DeepStream successfully?

No, I failed to deploy the lightweight pose estimation model on DeepStream. :(

OK, can you run the model via TensorRT? I mean, at least you need to make sure the model can be consumed by TensorRT.

I can convert the model to ONNX and build a TensorRT engine.
Then I can use them for pose estimation. However, there are still some circles at the top of the window. Maybe these circles affect the line drawing.

An example is the following:

https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch/issues/158

What do you mean? Also, what batch size are you using?


I mean that my batch size is 1.

I think it should be related to the post-processing. You can debug it by checking the source code of the post-processing logic.

Yes, I understand that.
But I already fixed the program, and it still has mistakes.

https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch/issues/158

===================================

std::tuple<Vec2D, Vec3D>
parse_objects_from_tensor_meta(NvDsInferTensorMeta *tensor_meta)
{
  void *cmap_data = tensor_meta->out_buf_ptrs_host[2];
  NvDsInferDims &cmap_dims = tensor_meta->output_layers_info[2].inferDims;
  void *paf_data = tensor_meta->out_buf_ptrs_host[3];
  NvDsInferDims &paf_dims = tensor_meta->output_layers_info[3].inferDims;

  static Vec2D topology{
      {0, 1, 1, 8},
      {2, 3, 8, 9},
      {4, 5, 9, 10},
      {6, 7, 1, 11},
      {8, 9, 11, 12},
      {10, 11, 12, 13},
      {12, 13, 1, 2},
      {14, 15, 2, 3},
      {16, 17, 3, 4},
      {18, 19, 2, 16},
      {20, 21, 1, 5},
      {22, 23, 5, 6},
      {24, 25, 6, 7},
      {26, 27, 5, 17},
      {28, 29, 1, 0},
      {30, 31, 0, 14},
      {32, 33, 0, 15},
      {34, 35, 14, 16},
      {36, 37, 15, 17}};

Hey, customer

I think you first need to understand the exact post-processing of the lightweight model; then you can try to deploy it to DeepStream, and then we can try to help you debug any issue that is really caused by the DeepStream SDK. For post-processing issues, I think you can debug them yourself.