Testing pose estimator in DeepStream

• Hardware Platform (Jetson / GPU)
xavier
• DeepStream Version
6.0
• JetPack Version (valid for Jetson only)
4.6
• TensorRT Version
TRT 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only)
10.2

I am testing the human pose estimation application.

I have already created the ONNX file for the given model here:
resnet18_baseline_att_224x224_A_epoch_249.onnx (79.2 MB)

The configuration file is:

[property]
gpu-id=0
net-scale-factor=0.0174292
offsets=123.675;116.28;103.53
onnx-file=densenet121_baseline_att_256x256_B_epoch_160.onnx
labelfile-path=labels.txt
batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
model-engine-file=densenet121_baseline_att_256x256_B_epoch_160.onnx_b1_gpu0_fp16.engine
network-type=100
workspace-size=3000
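
For reference, nvinfer applies per-channel preprocessing of the form y = net-scale-factor * (x - offset). With the values in this config, the scale factor is roughly 1/(0.225*255) and the offsets are the ImageNet channel means scaled to 0-255, i.e. standard ImageNet-style normalization. A minimal sketch of what that does to a pixel (the specific values are taken from the config above, the rest is illustrative):

```python
import numpy as np

# nvinfer per-channel preprocessing: y = net_scale_factor * (x - offset)
net_scale_factor = 0.0174292                   # ~1 / (0.225 * 255), ImageNet-style std
offsets = np.array([123.675, 116.28, 103.53])  # ImageNet channel means * 255

pixel = np.array([123.675, 116.28, 103.53])    # a pixel exactly at the channel means
y = net_scale_factor * (pixel - offsets)
print(y)  # a mean-valued pixel normalizes to ~0 in every channel
```

If the model was trained with a different normalization, these two keys are the first place to check.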

When I run the app as
./deepstream-pose-estimation-app /opt/nvidia/deepstream/deepstream-6.0/samples/stream/sample_720p.h264 sample.mp4

The output MP4 has no detections. What is wrong with my test?
You can see in the output video that circles are plotted, but not in the right places.

Any response? Why doesn’t it work? Did I do something wrong?

Sorry for the late reply.
I will take a look and get back to you later.

Thanks. I have a project that uses this app.

Did you make any changes in the post-processing for the model? If yes, you need to change the post-processing code accordingly.

I didn’t do anything. I just converted the model to ONNX as mentioned and followed the instructions.

The post-processing in the GitHub repo expects the model output dimensions in CHW format, but the model you provided is in HWC format. You need to change the post-processing accordingly.
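
To illustrate why the layout mismatch produces misplaced keypoints, here is a small sketch (the shapes are hypothetical, not the actual model dimensions): the same logical element lives at different flat offsets in CHW and HWC buffers, so CHW-style indexing into an HWC tensor reads the wrong values.

```python
import numpy as np

c, h, w = 3, 4, 5
chw = np.arange(c * h * w).reshape(c, h, w)  # layout the post-processing expects
hwc = chw.transpose(1, 2, 0)                 # layout the exported model emits

# Same logical element, different flat offsets:
#   CHW: k * h * w + y * w + x
#   HWC: y * w * c + x * c + k
k, y, x = 1, 2, 3
assert chw[k, y, x] == hwc[y, x, k]

# Reading the HWC buffer with a CHW offset grabs a different element,
# which is why the plotted circles land in the wrong places.
assert chw.ravel()[k * h * w + y * w + x] != hwc.ravel()[k * h * w + y * w + x]
```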

I used the export_for_isaac.py provided in deepstream_pose_estimation. The model was also downloaded from the mentioned link. Which part of the post-processing do I need to update?
Is it possible to make the change in export_for_isaac.py instead?

Supposing you used NVIDIA-AI-IOT/deepstream_pose_estimation: This is a sample DeepStream application to demonstrate a human pose estimation pipeline. (github.com) — if you need this, you need to change the post-processing accordingly.
There is no export_for_isaac.py in that GitHub repo; I think you are using another related GitHub app? The model you attached is from a different repo, NVIDIA-AI-IOT/trt_pose: Real-time pose estimation accelerated with NVIDIA TensorRT (github.com).

No, the Getting Started section of that repo states which model and which export utility code to use. I just used the mentioned model and code.

I changed the permutation in the export program from 0,2,3,1 (NHWC) to 0,1,2,3 (NCHW).
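
For anyone following along: changing the permutation from (0, 2, 3, 1) to (0, 1, 2, 3) makes the transpose a no-op, so the export keeps PyTorch’s native NCHW layout instead of converting to NHWC. A quick NumPy illustration (the batch/channel/spatial sizes here are hypothetical stand-ins, not the actual model’s):

```python
import numpy as np

# Hypothetical stand-in for a model output: batch 1, 18 heatmap channels, 56x56
out = np.zeros((1, 18, 56, 56))

nhwc = out.transpose(0, 2, 3, 1)   # permutation (0, 2, 3, 1): NCHW -> NHWC
nchw = out.transpose(0, 1, 2, 3)   # permutation (0, 1, 2, 3): identity, stays NCHW

print(nhwc.shape)  # (1, 56, 56, 18)
print(nchw.shape)  # (1, 18, 56, 56)
```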

Did it solve the issue?

I see some detections on the person with the resnet model, but they are not accurate.
Densenet has no detections. I’ll upload the detections tomorrow.
The current application works only for H.264 video, not for other formats,
so I updated the application to work with any video format and am testing whether it really works.
I have only one H.264 video, so I still don’t know whether it works or not.

The output tensor shapes changed from NHWC to NCHW after the edit; I checked using Netron. Here are the output tensor shapes before and after the change.

NHWC

NCHW