After creating the ONNX file and running make, when I run the command below:
arvind@arvind:/opt/nvidia/deepstream/…/deepstream_pose_estimation$ sudo ./deepstream-pose-estimation-app ./video.mp4 ./
One element could not be created. Exiting.
I am getting this error. I also replaced the nvosd .so file in lib. Any suggestions?
I thought nvosd was causing the error, so I tried removing it from deepstream_pose_estimation_app.cpp, but then it terminated with the error below:
(deepstream-pose-estimation-app:16180): GStreamer-CRITICAL **: 03:16:02.045: gst_element_get_static_pad: assertion ‘GST_IS_ELEMENT (element)’ failed
Unable to get sink pad
Now playing: ./video.mp4
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine open error
0:00:00.360042886 16180 0x555db5da9a30 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine failed
0:00:00.360106390 16180 0x555db5da9a30 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine failed, try rebuild
0:00:00.360130624 16180 0x555db5da9a30 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
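Note that the "Deserialize engine failed" messages above are warnings, not fatal errors: on the first run there is no serialized `.engine` file yet, so nvinfer falls back to building the engine from the ONNX model (the "Trying to create engine from model files" line). A minimal sketch of the relevant nvinfer config keys, with placeholder paths that must be adapted to your setup:

```ini
# Sketch of a gst-nvinfer config fragment (paths are placeholders).
[property]
onnx-file=pose_estimation.onnx
# If this file does not exist yet, nvinfer rebuilds it and logs the
# deserialize warnings seen above; the rebuilt engine is cached here.
model-engine-file=pose_estimation.onnx_b1_gpu0_fp16.engine
batch-size=1
# 2 = FP16
network-mode=2
```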
There has been no update from you for a while, so we are assuming this is no longer an issue and closing this topic. If you need further support, please open a new one.
Thanks
Have you modified the source code or other configs, such as the batch size of the model?