Deepstream_pose_estimation

I am following this example (https://github.com/NVIDIA-AI-IOT/deepstream_pose_estimation), and the results I obtained are shown below. The model is resnet18_baseline_att_224x224_A_epoch_249.onnx, converted from resnet18_baseline_att_224x224_A_epoch_249.pth. The white circles are the detected pose keypoints.
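
For reference, the .pth to .onnx conversion can be done along the lines of the sketch below. This assumes the trt_pose package (https://github.com/NVIDIA-AI-IOT/trt_pose) and its human_pose.json topology file are available; the output tensor names are illustrative placeholders, not names required by either repo.

```python
# Minimal sketch of the .pth -> .onnx conversion, assuming trt_pose is installed.
import json

import torch
import trt_pose.models

with open('human_pose.json') as f:
    human_pose = json.load(f)

num_parts = len(human_pose['keypoints'])  # 18 keypoints in human_pose.json
num_links = len(human_pose['skeleton'])

# resnet18_baseline_att outputs part confidence maps and part affinity fields.
model = trt_pose.models.resnet18_baseline_att(num_parts, 2 * num_links).eval()
model.load_state_dict(torch.load(
    'resnet18_baseline_att_224x224_A_epoch_249.pth', map_location='cpu'))

# 224x224 matches the checkpoint name; batch size 1 matches batch-size=1 below.
dummy_input = torch.zeros((1, 3, 224, 224))
torch.onnx.export(
    model, dummy_input,
    'resnet18_baseline_att_224x224_A_epoch_249.onnx',
    input_names=['input'],
    output_names=['cmap', 'paf'],  # illustrative names only
    opset_version=11)
```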


The config is:
gpu-id=0
net-scale-factor=0.0174292
offsets=123.675;116.28;103.53
onnx-file=resnet18_baseline_att_224x224_A_epoch_249.onnx
#labelfile-path=labels.txt
batch-size=1
process-mode=1
model-color-format=0
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
model-engine-file=resnet18_baseline_att_224x224_A_epoch_249.onnx_b1_gpu0_fp16.engine.enc
network-type=100
workspace-size=3000
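
For context, nvinfer preprocesses each pixel as y = net-scale-factor * (x - offset), so the two values above should reproduce the torchvision ImageNet normalization that the trt_pose checkpoints were trained with. A quick sanity check of the numbers (a sketch, not DeepStream code; the single scalar std of 0.225 is my assumption about how 0.0174292 was derived):

```python
# nvinfer applies y = net-scale-factor * (x - offset) per channel,
# with offsets in RGB order when model-color-format=0.
IMAGENET_MEAN = [0.485, 0.456, 0.406]  # torchvision ImageNet means (RGB)
IMAGENET_STD = 0.225                   # one scalar: nvinfer takes a single scale

offsets = [round(m * 255.0, 3) for m in IMAGENET_MEAN]
net_scale_factor = 1.0 / (IMAGENET_STD * 255.0)

print(offsets)           # [123.675, 116.28, 103.53] -> matches offsets=
print(net_scale_factor)  # 0.017429... -> matches net-scale-factor=0.0174292
```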

The pose output is not good. What should I do?

• Hardware Platform (Jetson / GPU): Xavier
• DeepStream Version: 5.0

I had the same problem on a Jetson TX2.
I also tested with a Tesla V100 and couldn't generate the engine.
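
For anyone trying to narrow down an engine-build failure like this, building the engine outside DeepStream usually surfaces the actual ONNX parser and builder errors. A rough sketch against the TensorRT 7 Python API that ships with DeepStream 5.0 (file names are placeholders):

```python
# Build an FP16 engine directly with TensorRT to see the real error messages.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
ONNX_PATH = 'resnet18_baseline_att_224x224_A_epoch_249.onnx'

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open(ONNX_PATH, 'rb') as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))  # exact reason the parse failed
        raise SystemExit(1)

config = builder.create_builder_config()
config.max_workspace_size = 3000 << 20   # matches workspace-size=3000 (MB)
config.set_flag(trt.BuilderFlag.FP16)    # matches network-mode=2

engine = builder.build_engine(network, config)
assert engine is not None, 'engine build failed; see logger output above'
with open('resnet18_baseline_att_224x224_A_epoch_249.engine', 'wb') as f:
    f.write(engine.serialize())
```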

I am also experiencing the exact same issue on a Xavier. Does this code work at all?
It would be great to get some feedback on where I am going wrong with this.

Thanks.

Not fixed yet

Hello!

I am the author of the code and would love to help you out. I am trying to reproduce this problem but am not able to. Can I ask which model (DenseNet/ResNet) and which sample video you're using?