DeepStream pose estimation model: error while running

Machine setup below:

• Hardware Platform (GPU): GTX 1650
• DeepStream Version: 5.0.1
• TensorRT Version: 7.2.1
• NVIDIA GPU Driver Version (valid for GPU only): 455.45

After creating the ONNX file and running make, I run the command below:
arvind@arvind:/opt/nvidia/deepstream/…/deepstream_pose_estimation$ sudo ./deepstream-pose-estimation-app ./video.mp4 ./
and it fails with:
One element could not be created. Exiting.
I have also replaced the nvosd .so file in lib. Any suggestions?
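"One element could not be created" usually means one of the app's gst_element_factory_make() calls returned NULL. A quick way to narrow it down is to ask GStreamer directly whether the suspect plugin is registered; this diagnostic sketch assumes gst-inspect-1.0 is on PATH (guarded in case it is not) and checks nvdsosd, the on-screen-display element this sample uses:

```shell
# Ask GStreamer whether the nvdsosd element is registered.
# If gst-inspect-1.0 reports "No such element or plugin", the DeepStream
# plugin path or the replaced nvosd .so is the likely culprit.
if command -v gst-inspect-1.0 >/dev/null 2>&1; then
    gst-inspect-1.0 nvdsosd || echo "nvdsosd not registered"
else
    echo "gst-inspect-1.0 not found on this machine"
fi
```

Running the same check for each factory name the app creates (nvstreammux, nvinfer, nvvideoconvert, …) pinpoints which element failed without editing the source.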

I thought nvosd was causing the error, so I tried removing it from deepstream_pose_estimation_app.cpp, but the app then terminated with the error below:
(deepstream-pose-estimation-app:16180): GStreamer-CRITICAL **: 03:16:02.045: gst_element_get_static_pad: assertion ‘GST_IS_ELEMENT (element)’ failed
Unable to get sink pad
Now playing: ./video.mp4
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine open error
0:00:00.360042886 16180 0x555db5da9a30 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine failed
0:00:00.360106390 16180 0x555db5da9a30 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine failed, try rebuild
0:00:00.360130624 16180 0x555db5da9a30 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files

Input filename: /opt/nvidia/deepstream/deepstream-5.0/samples/models/resnet18_baseline_att_224x224_A_epoch_249.onnx
ONNX IR version: 0.0.6
Opset version: 9
Producer name: pytorch
Producer version: 1.7
Domain:
Model version: 0
Doc string:

ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: …/rtSafe/cuda/cudaConvolutionRunner.cpp (483) - Cudnn Error in executeConv: 8 (CUDNN_STATUS_EXECUTION_FAILED)
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: …/rtSafe/cuda/cudaConvolutionRunner.cpp (483) - Cudnn Error in executeConv: 8 (CUDNN_STATUS_EXECUTION_FAILED)
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1186 Build engine failed from config file
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:884 failed to build trt engine.
0:00:20.055278344 16180 0x555db5da9a30 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
0:00:20.056496561 16180 0x555db5da9a30 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1821> [UID = 1]: build backend context failed
0:00:20.056506295 16180 0x555db5da9a30 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1148> [UID = 1]: generate backend failed, check config file settings
0:00:20.056531084 16180 0x555db5da9a30 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:20.056535075 16180 0x555db5da9a30 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start: error: Config file path: deepstream_pose_estimation_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running…
ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
Error details: gstnvinfer.cpp(809): gst_nvinfer_start (): /GstPipeline:deepstream-tensorrt-openpose-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: deepstream_pose_estimation_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback
Deleting pipeline
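The deserialize warnings early in the log are expected on a first run: nvinfer looks for the serialized engine named in its config (the `_b1_gpu0_fp16` suffix encodes batch size 1, GPU 0, FP16), and rebuilds it from the ONNX file when the engine is missing. Here the rebuild itself fails at a cuDNN convolution (CUDNN_STATUS_EXECUTION_FAILED), which in my experience most often points to a cuDNN/TensorRT/driver version mismatch or the GPU running out of memory during the build, and that failure is what surfaces as NVDSINFER_CONFIG_FAILED. A minimal sketch of the relevant keys in deepstream_pose_estimation_config.txt, assuming the stock sample layout (paths taken from the log above):

```ini
[property]
gpu-id=0
onnx-file=pose_estimation.onnx
# nvinfer regenerates this file if it is missing; delete a stale copy after
# upgrading TensorRT/CUDA so the engine is rebuilt for the new stack
model-engine-file=pose_estimation.onnx_b1_gpu0_fp16.engine
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
```

If the build keeps failing, trying network-mode=0 (FP32) or freeing GPU memory before the run can help separate a precision/version problem from an out-of-memory one.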

Which model are you using?

I am using the pretrained trt_pose model resnet18_baseline_att_224x224_A.

I can run the model now, but in the output video file Pose_Estimation.mp4 the skeleton points are not drawn properly. What could be the reason?

There has been no update from you for a while, so we are assuming this is no longer an issue and closing this topic. If you need further support, please open a new one.
Thanks

Have you modified the source code or other configs, such as the batch size of the model?