Please provide complete information as applicable to your setup.
**• Hardware Platform (Jetson / GPU)** GPU
**• DeepStream Version** 5.1
**• TensorRT Version** 22.214.171.124
**• NVIDIA GPU Driver Version (valid for GPU only)** 460.39
**• Issue Type (questions, new requirements, bugs)** questions
I have pose estimation working on my edge device, but the model is too large, about 83 MB. So I started trying lightweight-human-pose-estimation instead, which is only 13 MB.
However, I don't understand how to swap in this model for pose estimation. I already asked the author of lightweight-human-pose-estimation how to use it in DeepStream.
The following link is his response.
I understand what he means, but where exactly should I modify the output parsing?
I changed the program to the following, but it does not work: the white circles of the pose estimation always appear at the top of the window.
/* Method to parse information returned from the model.
 * Each output layer must be read from its own index: the original code
 * read both cmap and paf from the same (unindexed) buffer. */
void *cmap_data = tensor_meta->out_buf_ptrs_host[0];
NvDsInferDims &cmap_dims = tensor_meta->output_layers_info[0].inferDims;
void *paf_data = tensor_meta->out_buf_ptrs_host[1];
NvDsInferDims &paf_dims = tensor_meta->output_layers_info[1].inferDims;