Pipeline fails to create DeepStream test 4

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson
• DeepStream Version
Latest
• Issue Type( questions, new requirements, bugs)
question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! nvstreammux0.sink_0 nvstreammux name=nvstreammux0 batch-size=1 width=1920 height=1080 ! nvinfer batch-size=1 config-file-path=/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test4/dstest4_pgie_config.txt ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink

This pipeline reports back:

Setting pipeline to PAUSED ...
Opening in BLOCKING MODE 
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test4/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:01.798975578 14538   0x557ecf4640 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test4/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:01.799054694 14538   0x557ecf4640 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test4/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:01.799087039 14538   0x557ecf4640 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:26.033950041 14538   0x557ecf4640 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1748> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:26.076470306 14538   0x557ecf4640 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:./dstest4_pgie_config.txt sucessfully
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
0:00:26.263715254 14538   0x557e642400 WARN                 nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop:<nvinfer0> error: Internal data stream error.
0:00:26.264013228 14538   0x557e642400 WARN                 nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop:<nvinfer0> error: streaming stopped, reason not-linked (-1)
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer0: Internal data stream error.
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1975): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:nvinfer0:
streaming stopped, reason not-linked (-1)
Execution ended after 0:00:00.186390246
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

And it fails to run.

The pipeline you posted works well on my Jetson board.

I have the same problem with DS 4.

Actually, the pipeline I posted did work. However, when I add the MQTT branch, it does not:

gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! nvstreammux0.sink_0 nvstreammux name=nvstreammux0 batch-size=1 width=1920 height=1080 ! nvinfer batch-size=1 config-file-path=/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test4/dstest4_pgie_config.txt ! queue ! tee name=infres \
infres. ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink \
infres. ! nvmsgconv config=/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test4/dstest4_msgconv_config.txt payload-type=0 ! nvmsgbroker proto-lib=/opt/nvidia/deepstream/deepstream-5.0/sources/libs/aws_protocol_adaptor/device_client/libnvds_aws_proto.so config=/opt/nvidia/deepstream/deepstream-5.0/sources/libs/aws_protocol_adaptor/device_client/cfg_aws.txt topic=ryantest conn-str=aidnfuomgla6i-ats.iot.us-east-1.amazonaws.com:443

We are using an AWS MQTT broker. If that’s not available to you, could you provide a sample Kafka or other MQTT broker command line?

DeepStream supports the Azure MQTT protocol with nvmsgbroker. The DeepStream sample is in /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test4. Please read the README file in that folder.

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvmsgbroker.html#

According to the docs, we can plug in our own library via the
‘proto-lib=’ parameter, which is what we are doing. We have built our own library for AWS, which works fine with deepstream-test4, but not in this case. @Fiona.Chen, are you saying the DeepStream nvmsgbroker plugin ONLY supports Azure and the proto-lib functionality is broken?

No. nvmsgbroker can support customized protocol adapters. See Gst-nvmsgbroker — DeepStream 6.3 Release documentation.

Since your new proto lib works with deepstream-test4, the gst-launch-1.0 pipeline itself is not the problem. The only difference between deepstream-test4 and the gst-launch pipeline is that deepstream-test4 uses the “osd_sink_pad_buffer_probe” probe function to generate NvDsEventMsgMeta, which is the message that gets sent to the cloud; a gst-launch pipeline cannot do this. nvmsgconv must therefore be used in a GStreamer application, not in a gst-launch pipeline.
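To make the difference concrete, here is a simplified sketch of what the probe in deepstream-test4 does (based on the pattern in the deepstream-test4 source; treat field names and the frequency of message generation as illustrative, and check the actual sample for the complete version, including the copy/release hooks and the other NvDsEventMsgMeta fields it fills in):

```c
/* Simplified sketch of osd_sink_pad_buffer_probe from deepstream-test4.
 * For each detected object it allocates an NvDsEventMsgMeta, fills it
 * from the object metadata, and attaches it to the frame as user meta
 * so that nvmsgconv downstream can serialize it into a payload for
 * nvmsgbroker. */
static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list;
       l_frame != NULL; l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list;
         l_obj != NULL; l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;

      /* Build the event message that nvmsgconv will serialize. */
      NvDsEventMsgMeta *msg_meta = g_malloc0 (sizeof (NvDsEventMsgMeta));
      msg_meta->bbox.top    = obj_meta->rect_params.top;
      msg_meta->bbox.left   = obj_meta->rect_params.left;
      msg_meta->bbox.width  = obj_meta->rect_params.width;
      msg_meta->bbox.height = obj_meta->rect_params.height;
      msg_meta->objClassId  = obj_meta->class_id;
      msg_meta->frameId     = frame_meta->frame_num;

      /* Attach it to the frame as NVDS_EVENT_MSG_META user meta. */
      NvDsUserMeta *user_meta = nvds_acquire_user_meta_from_pool (batch_meta);
      user_meta->user_meta_data = (void *) msg_meta;
      user_meta->base_meta.meta_type = NVDS_EVENT_MSG_META;
      user_meta->base_meta.copy_func = meta_copy_func;    /* deep-copy hook  */
      user_meta->base_meta.release_func = meta_free_func; /* cleanup hook    */
      nvds_add_user_meta_to_frame (frame_meta, user_meta);
    }
  }
  return GST_PAD_PROBE_OK;
}
```

There is no gst-launch element that performs this step, which is why the same pipeline works inside the sample application but produces nothing for nvmsgconv/nvmsgbroker when launched from the command line.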