Pose Estimation with DeepStream does not work

Hardware Platform : Jetson Nano
DeepStream Version: 5.0.1
JetPack Version : 4.4.1

I installed PyTorch and TorchVision as in the following guide.

PyTorch: 1.7.0
TorchVision: 0.8.1

I downloaded the NVIDIA-AI-IOT/deepstream_pose_estimation sample (a DeepStream application demonstrating a human pose estimation pipeline) from GitHub and compiled it.
Then I ran it like this:

sudo ./deepstream-pose-estimation-app …/streams/test.mp4 ./

The error message is as follows:

Now playing: …/streams/test.mp4
Opening in BLOCKING MODE
Opening in BLOCKING MODE
ERROR: Deserialize engine failed because file path: /home/dssdk/sources/apps/pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine open error
0:00:06.342003619 18380 0x55b16b5000 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/home/robopia/aiengine201/sources/apps/pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine failed
0:00:06.342111590 18380 0x55b16b5000 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/home/robopia/aiengine201/sources/apps/pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine failed, try rebuild
0:00:06.342146538 18380 0x55b16b5000 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files

Input filename: /home/dssdk/sources/apps/pose_estimation/pose_estimation.onnx
ONNX IR version: 0.0.4
Opset version: 7
Producer name: pytorch
Producer version: 1.3
Domain:
Model version: 0
Doc string:

WARNING: [TRT]: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.


Can someone please help?
Thanks.

Also, I can’t speak English well.

Hi,

The error indicates that no TensorRT engine file exists in your workspace yet.
In that case, DeepStream builds one from the ONNX file directly, which is what the "Trying to create engine from model files" message shows.

As long as the ONNX model exists at /home/dssdk/sources/apps/pose_estimation/pose_estimation.onnx,
the pipeline should build the engine and work normally. The INT64-to-INT32 cast warning from TensorRT is expected and harmless.
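If you want to avoid rebuilding the engine on every launch (engine builds are slow on the Nano), you can pre-build it once with trtexec. A minimal sketch — the paths are taken from your log, and the trtexec location is the usual JetPack default; adjust both to your setup:

```shell
# Paths assumed from the log above (adjust to your setup).
ONNX=/home/dssdk/sources/apps/pose_estimation/pose_estimation.onnx
ENGINE="${ONNX}_b1_gpu0_fp16.engine"

# First verify the ONNX model is where nvinfer expects it.
if [ ! -f "$ONNX" ]; then
  echo "ONNX model not found at $ONNX"
fi

# Pre-build the FP16 engine so DeepStream can deserialize it at startup
# instead of rebuilding it each run (trtexec ships with TensorRT on JetPack):
# /usr/src/tensorrt/bin/trtexec --onnx="$ONNX" --fp16 --saveEngine="$ENGINE"
```

Note that the engine filename must match what nvinfer looks for (the `_b1_gpu0_fp16.engine` suffix encodes batch size, GPU ID, and precision), otherwise DeepStream will still fall back to rebuilding.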

Thanks.