DeepStream Python Error

Hardware Platform: [ AGX Xavier™ Developer Kit ]
Software Version: [ JetPack 4.4 ]
DeepStream 5.0

I am getting the following error while running the Python DeepStream test code.
The installed sample apps work (deepstream-app -c config works).

xxxx@xxxx-desktop:/opt/nvidia/deepstream/deepstream-5.0/sources/python/apps/deepstream-test1$ python3 deepstream_test_1.py ../../../../samples/streams/sample_720p.mp4

Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file ../../../../samples/streams/sample_720p.mp4
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

Using winsys: x11
Opening in BLOCKING MODE
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/python/apps/deepstream-test1/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:01.436452314 30834 0x341ea190 WARN nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1566> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/python/apps/deepstream-test1/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:01.436692901 30834 0x341ea190 WARN nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1673> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/python/apps/deepstream-test1/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:01.436883821 30834 0x341ea190 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
ERROR: Cannot access caffemodel file ‘/opt/nvidia/deepstream/deepstream-5.0/sources/python/apps/deepstream-test1/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel’
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.
0:00:01.437789173 30834 0x341ea190 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed
0:00:01.437844728 30834 0x341ea190 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1697> [UID = 1]: build backend context failed
0:00:01.437876025 30834 0x341ea190 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1024> [UID = 1]: generate backend failed, check config file settings
0:00:01.437926108 30834 0x341ea190 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:01.437951517 30834 0x341ea190 WARN nvinfer gstnvinfer.cpp:781:gst_nvinfer_start: error: Config file path: dstest1_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(781): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest1_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

Hi @airpixin,
Try adding "force-implicit-batch-dim=1" as below (this forces nvinfer to build the network in TensorRT's implicit-batch mode, which the Caffe parser expects).
Note that the official DeepStream release for JetPack 4.4 is DeepStream 5.0 GA, which is coming very soon.

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid


Thanks,
I tried what you suggested.
There is no error now, but the execution is stuck at

NvMMLiteBlockCreate : Block : BlockType = 261

xxxx@xxxx-desktop:/opt/nvidia/deepstream/deepstream-5.0/sources/python/apps/deepstream-test1$ python3 deepstream_test_1.py ../../../../samples/streams/sample_720p.mp4
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file ../../../../samples/streams/sample_720p.mp4
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

Using winsys: x11
Opening in BLOCKING MODE
0:00:03.100719469 10327 0x3aecab90 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:03.101016568 10327 0x3aecab90 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:03.105303358 10327 0x3aecab90 INFO nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261

This is the error that I get when I change the input video file.

Error: gst-stream-error-quark: Failed to parse stream (7): gstbaseparse.c(2954): gst_base_parse_check_sync (): /GstPipeline:pipeline0/GstH264Parse:h264-parser

xxxx@xxxx-desktop:/opt/nvidia/deepstream/deepstream-5.0/sources/python/apps/deepstream-test1$ python3 deepstream_test_1.py /home/xxxx/Videos/1.mp4
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file /home/xxxx/Videos/1.mp4
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

Using winsys: x11
Opening in BLOCKING MODE
0:00:03.108948162 10800 0x3d31b590 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:03.109184362 10800 0x3d31b590 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:03.112393532 10800 0x3d31b590 INFO nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Error: gst-stream-error-quark: Failed to parse stream (7): gstbaseparse.c(2954): gst_base_parse_check_sync (): /GstPipeline:pipeline0/GstH264Parse:h264-parser

As mentioned in the README of this sample, it only accepts an <h264_elementary_stream>, so you need to use an H.264 elementary stream instead of an mp4 file.
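For example, assuming the video track inside 1.mp4 is already H.264-encoded and ffmpeg is installed, a command like the following should extract an elementary stream the sample can read (the file paths here are only placeholders):

ffmpeg -i /home/xxxx/Videos/1.mp4 -c:v copy -bsf:v h264_mp4toannexb -an /home/xxxx/Videos/1.h264
python3 deepstream_test_1.py /home/xxxx/Videos/1.h264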

Thanks!


What about this issue?

I have told you what the mistake in your command is.

What else do you want from our side?

Thanks for the prompt reply.

I understand the issue with the H264 file. Thanks for letting me know.

Sorry, I forgot to attach this block in the last reply.

Based on your initial changes I added "force-implicit-batch-dim=1", and it works without giving any error, but the execution is stuck at this line.

NvMMLiteBlockCreate : Block : BlockType = 261

xxxx@xxxx-desktop:/opt/nvidia/deepstream/deepstream-5.0/sources/python/apps/deepstream-test1$ python3 deepstream_test_1.py ../../../../samples/streams/sample_720p.mp4
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file ../../../../samples/streams/sample_720p.mp4
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

Using winsys: x11
Opening in BLOCKING MODE
0:00:03.100719469 10327 0x3aecab90 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:03.101016568 10327 0x3aecab90 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:03.105303358 10327 0x3aecab90 INFO nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261

You could refer to the test3 Python sample, which uses uridecodebin so that any type of input (e.g. RTSP or file), any GStreamer-supported container format, and any codec can be used as input.
Please note again that the test1 Python sample only accepts an H.264 elementary stream.
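If it helps, below is a minimal sketch of the uridecodebin pattern that test3 relies on. It is not the actual deepstream-test3 code (the nvstreammux/nvinfer/nvdsosd chain is replaced by a fakesink just to keep it self-contained), and the element and variable names are only illustrative. The key point is that uridecodebin creates its decoded pads at runtime, so the downstream link has to be made in a "pad-added" callback.

#!/usr/bin/env python3
# Usage (illustrative): python3 uridecodebin_sketch.py file:///home/xxxx/Videos/1.mp4
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.Pipeline.new("demo-pipeline")

# uridecodebin picks the demuxer/decoder for the given URI automatically,
# so mp4/mkv containers, H.264/H.265 codecs, RTSP sources, etc. all work.
source = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
source.set_property("uri", sys.argv[1])  # must be a full URI, e.g. file:///... or rtsp://...

# A fakesink stands in for the real DeepStream chain (nvstreammux ->
# nvinfer -> nvdsosd -> renderer) so the sketch stays self-contained.
sink = Gst.ElementFactory.make("fakesink", "fake-sink")

pipeline.add(source)
pipeline.add(sink)

def on_pad_added(decodebin, pad):
    # Called once the stream has been discovered; link only the video pad.
    caps = pad.get_current_caps() or pad.query_caps(None)
    if caps.get_structure(0).get_name().startswith("video"):
        pad.link(sink.get_static_pad("sink"))

source.connect("pad-added", on_pad_added)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::eos", lambda *args: loop.quit())
bus.connect("message::error", lambda *args: loop.quit())
try:
    loop.run()
except KeyboardInterrupt:
    pass
pipeline.set_state(Gst.State.NULL)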

Thank you!