Please provide complete information as applicable to your setup.
• I am running DeepStream in a Docker container on my laptop (i7 9th Gen + GTX 1650, 4 GB)
• DeepStream Version 5.0
• TensorRT Version 7.x
• NVIDIA GPU Driver Version 525.125.06
• Issue: I am running the DeepStream human pose estimation app. For the first ~20-30 minutes I can see the GPU being used; after that the app prints "Running…" at the end of the log, but no output video is produced even after 1 hour of running.
• Console output:
root@usr:/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation# ./deepstream-pose-estimation-app /home/deepstream/folder/videos/video.mp4 /home/deepstream/folder/videos/
Now playing: /home/deepstream/folder/videos/video.mp4
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1523 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine open error
0:00:01.916855538 227 0x561e7d4ad470 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine failed
0:00:01.916882673 227 0x561e7d4ad470 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/pose_estimation.onnx_b1_gpu0_fp16.engine failed, try rebuild
0:00:01.916890805 227 0x561e7d4ad470 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
Input filename: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/densenet121_baseline_att_256x256_B_epoch_160.onnx
ONNX IR version: 0.0.7
Opset version: 9
Producer name: pytorch
Producer version: 1.10
Domain:
Model version: 0
Doc string:
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 3 output network tensors.
0:26:54.734927964 227 0x561e7d4ad470 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1748> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation/densenet121_baseline_att_256x256_B_epoch_160.onnx_b1_gpu0_fp16.engine successfully
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [FullDims Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x256x256 min: 1x3x256x256 opt: 1x3x256x256 Max: 1x3x256x256
1 OUTPUT kFLOAT part_affinity_fields 64x64x42 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT heatmap 64x64x18 min: 0 opt: 0 Max: 0
3 OUTPUT kFLOAT maxpool_heatmap 64x64x18 min: 0 opt: 0 Max: 0
0:26:54.750464320 227 0x561e7d4ad470 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:deepstream_pose_estimation_config.txt sucessfully
Running…
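Note on the timestamps above: the log jumps from 0:00:01 to 0:26:54, so roughly 27 minutes of the run are spent rebuilding the TensorRT engine after the "Deserialize engine failed … open error" at startup (the serialized engine file does not exist yet, so nvinfer rebuilds it from the ONNX model). One possible way to avoid paying this cost on every cold start is to pre-build and cache the engine once with trtexec and point the nvinfer config at the saved file. This is only a sketch, not verified on this setup; it assumes trtexec (shipped with TensorRT) is on PATH inside the container, and the paths below mirror the log:

```shell
# Hypothetical one-time engine build inside the DeepStream container.
# Assumes trtexec (bundled with TensorRT) is available on PATH.
APP_DIR=/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream_pose_estimation

trtexec \
  --onnx=$APP_DIR/densenet121_baseline_att_256x256_B_epoch_160.onnx \
  --fp16 \
  --workspace=1024 \
  --saveEngine=$APP_DIR/densenet121_baseline_att_256x256_B_epoch_160.onnx_b1_gpu0_fp16.engine
```

If the `model-engine-file` property in deepstream_pose_estimation_config.txt points at the saved engine, subsequent runs should deserialize it in seconds instead of rebuilding. This does not explain the missing output video, but it separates the slow-startup symptom from the no-output symptom.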