Questions about running lightweight-human-pose-estimation

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** GPU
**• DeepStream Version** 5.1
**• TensorRT Version** 7.2.2.3
**• NVIDIA GPU Driver Version (valid for GPU only)** 460.39
**• Issue Type (questions, new requirements, bugs)** questions

I have pose estimation working on an edge device, but the model is too large: about 83 MB. Therefore, I started trying lightweight-human-pose-estimation, which is only about 13 MB.

However, I don’t understand how to swap in a different model for pose estimation. I already asked the author of lightweight-human-pose-estimation about running it in DeepStream.

The following link is his response.
https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch/issues/158

I understand what he means, but where should I modify the output parsing?

I changed the program as follows, but it doesn’t work. The pose estimation’s white circles always appear at the top of the window.
===================================

/* Method to parse information returned from the model */
std::tuple<Vec2D, Vec3D>
parse_objects_from_tensor_meta(NvDsInferTensorMeta *tensor_meta)
{
  void *cmap_data = tensor_meta->out_buf_ptrs_host[2];
  NvDsInferDims &cmap_dims = tensor_meta->output_layers_info[2].inferDims;
  void *paf_data = tensor_meta->out_buf_ptrs_host[3];
  NvDsInferDims &paf_dims = tensor_meta->output_layers_info[3].inferDims;

Have you checked GitHub - NVIDIA-AI-IOT/deepstream_pose_estimation (a sample DeepStream application demonstrating a human pose estimation pipeline)?

Yes, I already checked it.
The link is in my post above.

What I mean is that I don’t understand how to change the program to use the lightweight pose estimation model.

The following link shows my revised method; I hope somebody can teach me how to improve it.
https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch/issues/158#issuecomment-808922373

Hey, sorry for the late reply. So, have you deployed the lightweight pose estimation model on DeepStream successfully?

No, I failed to deploy the lightweight pose estimation model on DeepStream. :(

OK, can you run the model via TensorRT? I mean, at the very least you need to make sure the model can be consumed by TensorRT.

I can convert the model to ONNX and build a TensorRT engine.
Then I can use them for pose estimation. However, there are still some circles at the top of the window. Maybe these circles affect drawing the lines.

An example is at the following link.

https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch/issues/158