DeepStream pose estimation with densenet121 weights not working

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Nano
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6
• TensorRT Version 8

Based on GitHub - NVIDIA-AI-IOT/deepstream_pose_estimation (a sample DeepStream application demonstrating a human pose estimation pipeline), two .pth files are provided as weights for the repo: one is resnet18_baseline_att_224x224_A_epoch_249.pth, the other is densenet121_baseline_att_256x256_B_epoch_160.pth.

The process is to use the export script from NVIDIA-AI-IOT/trt_pose on GitHub to export the .pth file to an ONNX file.

However, densenet121_baseline_att_256x256_B_epoch_160 is not working as expected: it shows random keypoints on a given image.


Could you try the suggestion shared in the topic below to see if it helps?

If not, could you share the converted ONNX file and the testing video with us?
We want to reproduce this issue in our environment first.


Please note that the model weights I intend to try are the other model shared on the trt_pose GitHub.

I converted it to ONNX with the original export script, using output_names = ["part_affinity_fields", "heatmap", "maxpool_heatmap"]

Image file tried on:

I realised you changed the resnet18 output names to 262 and 264. How do you find out the output names from the .pth file?

Same for the densenet121 model I wanted to try: how do I find out the output names so that I can export it correctly to ONNX?


Although it is for another model, the issue may be similar.
Since the output layer names change between exports, a mismatch can lead to unexpected behavior.

Below is a visualization website (Netron); you can upload the ONNX model there and find the output layer names easily.


I am looking at the Netron app. For resnet18, which you changed to 262 and 264 for the output names, which module actually tells you this? Would you mind sharing a screenshot?

Trying my luck: are there any updates on this?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.