DeepStream pose estimation output log "Killed"

Thank you

Hi,

We just found the reason for the blocking.

First, the sample writes its output to a video file rather than opening a display.
You can find the file at ${output folder}/Pose_Estimation.mp4.

Second, the sample requires the input video to be in .h264 format.
For example, the pipeline starts immediately with the /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264 video.

Alternatively, you can use ffmpeg to convert your video into .h264 format.
For example:

$ ffmpeg -i video.mp4 -vcodec libx264 video.h264
$ ./deepstream-pose-estimation-app video.h264 [output]

With your video, which is 18 s long, we get the output video in 40.778 s.
(This includes some pipeline initialization time.)

Thanks.

Thanks, it worked on our video.

Is there any Python implementation of DeepStream pose estimation available?

Hi,

Sorry, currently we only have a C++ pose estimation sample.
But you can port it to Python following the examples in the GitHub repository below:
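For a rough idea of what the port involves, below is a minimal sketch of the same pipeline in Python using the GStreamer bindings. The element chain mirrors the typical DeepStream decode/infer/draw/encode pipeline, but the nvinfer config path (deepstream_pose_estimation_config.txt) and the script name (pose_sketch.py) are assumptions, not part of the official sample. The sketch also omits the buffer probe that parses the model's raw tensor output into keypoints; that post-processing is the part you would port from the C++ sample.

#!/usr/bin/env python3
# Minimal sketch of a DeepStream pose-estimation pipeline in Python.
# Requires GStreamer and the DeepStream plugins installed; the nvinfer
# config path below is an assumption -- point it at the config file
# shipped with the C++ sample.
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

def main(input_h264, output_mp4):
    Gst.init(None)
    # Define the muxed main branch first, then link the file source
    # into the mux's sink_0 pad (standard gst-launch syntax).
    pipeline = Gst.parse_launch(
        "nvstreammux name=mux batch-size=1 width=1280 height=720 ! "
        "nvinfer config-file-path=deepstream_pose_estimation_config.txt ! "
        "nvvideoconvert ! nvdsosd ! nvvideoconvert ! "
        "nvv4l2h264enc ! h264parse ! qtmux ! "
        f"filesink location={output_mp4} "
        f"filesrc location={input_h264} ! h264parse ! nvv4l2decoder ! mux.sink_0"
    )
    # NOTE: the C++ sample attaches a buffer probe after nvinfer to read
    # the raw tensor meta and compute the joints; that logic is omitted here.
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message::eos", lambda *args: loop.quit())
    bus.connect("message::error", lambda *args: loop.quit())
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    finally:
        pipeline.set_state(Gst.State.NULL)

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])

Usage would then mirror the C++ app:

$ python3 pose_sketch.py video.h264 Pose_Estimation.mp4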

Thanks.

Hello,

Thank you for your response. I will check this repo for the code conversion.

I was testing pose estimation with the pose_Estimation.onnx model, and everything seems to work properly. Then I tested the resnet18_baseline_att_224x224_A_epoch_249.onnx model on the same video, and it does not predict a single joint properly.