• Hardware Platform (Jetson / GPU): Xavier
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details needed to reproduce it.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
I am also experiencing the exact same issue on a Xavier. Does this code work at all?
Would be great to get some feedback on where I am going wrong with this.
I am the author of the code and would love to help you out. I am trying to reproduce this problem but am not able to. Can I ask which model (DenseNet/ResNet) and which sample video you're using?
Is this how you are running the app?

sudo ./deepstream-pose-estimation-app ../../../../samples/streams/sample_720p.h264 /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream_pose_estimation_master/
Are you not getting a human pose at all? In some of the sample streams the faces are blurred, so it is possible that the model is misclassifying some keypoints and cannot reliably establish relationships between detected body parts, for example between the face and the shoulders, and so no human pose gets drawn. You should still have a pose drawn onto the video for most frames, though. Try a different H.264-encoded stream to see if the problem persists.
Maybe your stream isn't being decoded properly if you're using an MP4 file. Try encoding a different video file as an H.264 elementary stream to see if the problem persists. One easy way to do this is with ffmpeg:

sudo apt install ffmpeg
ffmpeg -i input.mp4 -vcodec copy -an -bsf:v h264_mp4toannexb output.h264
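If you want to sanity-check the result, ffprobe (installed alongside ffmpeg) should report the file as a raw H.264 stream; the filename below is just the example output name from the command above:

ffprobe output.h264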
It is also possible that your model can't be parsed properly. Did you use the Isaac export script to generate the ONNX file?
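For reference, the trt_pose export script is typically run on the PyTorch checkpoint before pointing DeepStream at the resulting ONNX file, along these lines (the checkpoint name here is just the one used elsewhere in this thread; adjust paths for your setup):

python ./export_for_isaac.py --input_checkpoint resnet18_baseline_att_224x224_A_epoch_249.pth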
Upon further inspection, it seems I can only reproduce your result when the model is not converted to ONNX successfully. Does the TRT engine show this information during inference when you run the app?
I got this working. Works like a charm. Perfect. Great work.
For those experiencing problems with inference, please use this modified ONNX file: http://robbiwu.com/pose_estimation.onnx
When I convert the trt_pose model resnet18_baseline_att_224x224_A_epoch_249.pth to an ONNX model, I get the same wrong result. Could you provide your trained model resnet18_baseline_att_224x224_A_epoch_249.pth and the conversion script? I cannot get a correct ONNX model using trt_pose's export_for_isaac.py. @anujsaharan, after DeepStream reads the model, my model outputs are channel-last, i.e. 56x56x42 and 56x56x18; even when I change the channel order, the ONNX model's result is still not right.
Hi @anujsaharan! Great work, thanks. I was able to run the code with the model you provided without any issue. But converting from trt_pose models produces output similar to RayZhang's output frame. I'm converting with export_for_isaac.py without any errors, but deepstream_pose_estimation reports the output differently:

$ python ./export_for_isaac.py --input_checkpoint resnet18_baseline_att_224x224_A_epoch_249.pth
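In case it helps with the channel-order question above, here is a minimal sketch for inspecting an exported model's output shapes, assuming the onnx Python package is installed; the filename is just a placeholder for whichever ONNX file you exported:

import onnx

# Load the exported model and print the name and shape of each graph output.
# Compare these against the shapes of the reference pose_estimation.onnx to
# see whether the layout (channel-first vs. channel-last, e.g. 42x56x56 vs.
# 56x56x42) matches what the app expects.
model = onnx.load("pose_estimation.onnx")
for output in model.graph.output:
    dims = [d.dim_value for d in output.type.tensor_type.shape.dim]
    print(output.name, dims)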
Hi @anujsaharan !
I have the same problem as RayZhang.
The problem occurs when I convert/use models obtained from “export_for_isaac.py” in trt_pose.
However, your model “pose_estimation.onnx” in “deepstream_pose_estimation” works fine.
Could you tell me how you made your default “pose_estimation.onnx”? Thanks!