Unable to test the sample application in DeepStream

• Hardware Platform: HPE Proliant-DL-385-GEN-10-Plus
• Graphics Card : Tesla T4
• OS : Ubuntu 18.04 (64-bit)
• NVIDIA GPU Driver Version : 450.102.04
• Cuda Version : 10.2
• cuDNN Version: 8.0.2.39
• TensorRT Version : 7.1.3.4
• DeepStream Version : 5.0

• Issue Type: Unable to run the DeepStream sample application
• How to reproduce the issue?
Download DeepStream 5.0 and then download the models as instructed in the README file.
Then go to:

/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app

Command:

deepstream-app -c source30_1080p_dec_infer-resnet_tiled_display_int8.txt

Output Error:

** ERROR: <create_osd_bin:77>: Failed to create 'nvosd0'
** ERROR: <create_osd_bin:119>: create_osd_bin failed
** ERROR: <create_processing_instance:802>: create_processing_instance failed
** ERROR: <create_pipeline:1296>: create_pipeline failed
** ERROR: main:636: Failed to create pipeline
Quitting
App run failed

Can anyone help me fix this issue?

Can you get the output of gst-inspect-1.0 nvosd?
Remove the cache (rm -rf ~/.cache/gstreamer-1.0/) and try again.
You can get more debug logs for analysis:
GST_DEBUG=*:5 <sample app to run>
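For concreteness, the suggested cleanup can be sketched as a short shell snippet (the paths are the GStreamer defaults, and the config file name is the one from the failing run above; the app invocation is left commented since it only runs on the DeepStream machine):

```shell
# Remove GStreamer's plugin registry cache; a stale registry can cause
# element creation failures such as "Failed to create 'nvosd0'".
rm -rf ~/.cache/gstreamer-1.0/

# On the next run, GStreamer rebuilds the registry. Re-run with verbose
# logging and capture it to a file for analysis:
#   GST_DEBUG=*:5 deepstream-app -c source30_1080p_dec_infer-resnet_tiled_display_int8.txt 2> debug.log
```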

After I removed the cache, it is working.
When I run the application in the format below:

lpr-test-app [language mode:1-us 2-chinese]
  [sink mode:1-output as 264 stream file 2-no output 3-display on screen]
  [ROI enable:0-disable ROI 1-enable ROI]
  [input mp4 file path and name] [input mp4 file path and name] ... [input mp4 file path and name]
  [output 264 file path and name]

./deepstream-lpr-app 1 1 0 TeslaCar.mp4 TFile.mp4

it works fine.

But if I run the application with this command

./deepstream-lpr-app 1 3 0 TeslaCar.mp4 TFile.mp4

I am unable to see the live detection preview.

The following error occurred:

GST_DEBUG=*:5 ./deepstream-lpr-app 1 3 0 TeslaCar.mp4 TFile.mp4

Request sink_0 pad from streammux
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
Unknown or legacy key specified 'process_mode' for group [property]
Now playing: 1
libEGL warning: DRI2: failed to authenticate
0:00:01.160744742 20828 0x565053f27290 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 3]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_lpr_app/models/LP/LPR/lpr_us_onnx_b16.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT image_input 3x48x96 min: 1x3x48x96 opt: 4x3x48x96 Max: 16x3x48x96
1 OUTPUT kINT32 tf_op_layer_ArgMax 24 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT tf_op_layer_Max 24 min: 0 opt: 0 Max: 0

ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_bbox/BiasAdd
0:00:01.160884183 20828 0x565053f27290 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1669> [UID = 3]: Could not find output layer 'output_bbox/BiasAdd' in engine
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_ARGUMENT: Cannot find binding of given name: output_cov/Sigmoid
0:00:01.160896486 20828 0x565053f27290 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1669> [UID = 3]: Could not find output layer 'output_cov/Sigmoid' in engine
0:00:01.160905523 20828 0x565053f27290 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 3]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_lpr_app/models/LP/LPR/lpr_us_onnx_b16.engine
0:00:01.161806970 20828 0x565053f27290 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 3]: Load new model:lpr_config_sgie_us.txt sucessfully
0:00:01.161955908 20828 0x565053f27290 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 2]: Trying to create engine from model files
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Missing dynamic range for tensor output_bbox/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Missing dynamic range for tensor output_cov/BiasAdd, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Missing dynamic range for tensor output_cov/Sigmoid, expect fall back to non-int8 implementation for any layer consuming or producing given tensor
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:39 [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:08.367078506 20828 0x565053f27290 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1748> [UID = 2]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_lpr_app/models/LP/LPD/usa_pruned.etlt_b16_gpu0_int8.engine successfully
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x480x640
1 OUTPUT kFLOAT output_bbox/BiasAdd 4x30x40
2 OUTPUT kFLOAT output_cov/Sigmoid 1x30x40

0:00:08.370937526 20828 0x565053f27290 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 2]: Load new model:lpd_us_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvdcf.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
!! [WARNING][NvDCF] Unknown param found: minMatchingScore4Motion
!! [WARNING][NvDCF] Unknown param found: matchingScoreWeight4Motion
[NvDCF] Initialized
0:00:08.626409892 20828 0x565053f27290 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_lpr_app/models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 4x34x60

0:00:08.626502055 20828 0x565053f27290 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_lpr_app/models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine
0:00:08.628460399 20828 0x565053f27290 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:trafficamnet_config.txt sucessfully
Running…
qtdemux pad video/x-h264
h264parser already linked. Ignoring.
cuGraphicsGLRegisterBuffer failed with error(219) gst_eglglessink_cuda_init texture = 1
Frame Number = 0 Vehicle Count = 0 Person Count = 0 License Plate Count = 0
Frame Number = 1 Vehicle Count = 0 Person Count = 0 License Plate Count = 0
0:00:08.881477561 20828 0x565053d675e0 WARN nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:08.881500083 20828 0x565053d675e0 WARN nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop: error: streaming stopped, reason not-negotiated (-4)
ERROR from element secondary-infer-engine2: Internal data stream error.
Error details: gstnvinfer.cpp(1975): gst_nvinfer_output_loop (): /GstPipeline:pipeline/GstNvInfer:secondary-infer-engine2:
streaming stopped, reason not-negotiated (-4)
Returned, stopping playback
0:00:08.886638446 20828 0x565053d67850 WARN nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:08.886652773 20828 0x565053d67850 WARN nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop: error: streaming stopped, reason not-negotiated (-4)
[NvDCF] De-initialized
Average fps 0.000000
Totally 0 plates are inferred
Deleting pipeline

Please follow this to set up the display: Deepstream/FAQ - eLinux.org, section 5.1. Alternatively, you can use fakesink (sink mode 2) or output to a file (sink mode 1).
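If you want to keep sink mode 3 (on-screen display) on a server like this, the "libEGL warning: DRI2: failed to authenticate" and "cuGraphicsGLRegisterBuffer failed with error(219)" messages above typically mean the process cannot reach a working X display. A minimal sketch, assuming an X server is running on display :0 of the GPU machine (adjust the display number for your setup):

```shell
# Point the process at the local X display before launching the app.
export DISPLAY=:0

# Sink mode 3 needs a working display; the other modes avoid it entirely
# (invocations below mirror the ones used earlier in this thread):
#   ./deepstream-lpr-app 1 3 0 TeslaCar.mp4 TFile.mp4   # display on screen
#   ./deepstream-lpr-app 1 2 0 TeslaCar.mp4 TFile.mp4   # fakesink, no output
#   ./deepstream-lpr-app 1 1 0 TeslaCar.mp4 TFile.mp4   # encode to H.264 file
```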