Unable to create Nvv4l2 Decoder

Hi,
Here is my configuration:
Ubuntu 18.04
NVIDIA GPU: GeForce RTX 2080 Ti
Driver Version: 440.33.01
CUDA: 10.2
TensorRT: 7.0.0.11
cuDNN: 7.6.5
DeepStream: 5.0

I tried to run

python deepstream_test_1.py /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
and the output is:
Creating Pipeline

Creating Source 
 
Creating H264Parser 

Creating Decoder 

 Unable to create Nvv4l2 Decoder 
 Unable to create NvStreamMux 
 Unable to create pgie 
 Unable to create nvvidconv 
 Unable to create nvosd 
Creating EGLSink 

 Unable to create egl sink 
Playing file /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 
Traceback (most recent call last):
  File "deepstream_test_1.py", line 266, in <module>
    sys.exit(main(sys.argv))
  File "deepstream_test_1.py", line 199, in main
    streammux.set_property('width', 1920)
AttributeError: 'NoneType' object has no attribute 'set_property'

How can I fix it?
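For context on the traceback: Gst.ElementFactory.make() returns None when a plugin cannot be found, and the script then calls set_property on that None. The snippet below is illustrative only and reproduces the failure mode without needing GStreamer installed:

```python
# Illustrative only: Gst.ElementFactory.make("nvstreammux", ...) returns None
# when the plugin is missing, so the later set_property call raises.
streammux = None  # what ElementFactory.make returns for a missing plugin
try:
    streammux.set_property("width", 1920)
except AttributeError as err:
    print(err)  # 'NoneType' object has no attribute 'set_property'
```

So the "Unable to create ..." lines above are the root cause, and the AttributeError is just the first place the script trips over a missing element.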

Can you run the "gst-inspect-1.0 nvstreammux" command to check whether the nvstreammux plugin is available on your platform?
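A quick scripted version of that check could look like this (a minimal sketch; gst-inspect-1.0 exits with a non-zero status when the element is unknown):

```shell
# Check whether GStreamer can see the nvstreammux element;
# gst-inspect-1.0 exits non-zero when the element is unknown.
if gst-inspect-1.0 nvstreammux >/dev/null 2>&1; then
    echo "nvstreammux found"
else
    echo "nvstreammux missing - check the DeepStream install and GST_PLUGIN_PATH"
fi
```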

Thank you for replying.
Output is:
No such element or plugin 'nvstreammux'.
How do I install this plugin?

Before starting to use DeepStream, please read the DeepStream documentation carefully.

I have reinstalled DeepStream following the documentation, but it still does not work.
When running: deepstream-app --version-all
Output is:
deepstream-app version 5.0.0
DeepStreamSDK 5.0.0
CUDA Driver Version: 10.2
CUDA Runtime Version: 10.2
TensorRT Version: 7.0
cuDNN Version: 7.6
Dewarper: not found

Are all components such as CUDA, TensorRT, cuDNN, … installed correctly? What is your GPU driver version?

Ubuntu 18.04
NVIDIA GPU: GeForce RTX 2080 Ti
Driver Version: 440.33.01
CUDA: 10.2
TensorRT: 7.0.0.11
cuDNN: 7.6.5
DeepStream: 5.0
Should I update the GPU driver version?

You can refer to https://collabnix.com/introducing-new-docker-cli-api-support-for-nvidia-gpus-under-docker-engine-19-03-0-beta-release/
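For reference, launching the sample inside a DeepStream container usually looks something like the sketch below. The image tag is an assumption for illustration and should be verified against NGC, and --gpus all requires the NVIDIA Container Toolkit described in the link above:

```shell
# Assumed NGC image tag for DeepStream 5.0 - verify against NGC before use.
IMAGE="nvcr.io/nvidia/deepstream:5.0-20.07-samples"
# Build the run command; --gpus all needs the NVIDIA Container Toolkit.
CMD="docker run --gpus all -it --rm $IMAGE"
echo "$CMD"   # on a machine with Docker + the toolkit, run: eval "$CMD"
```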

I have installed Docker, run python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 inside the container,
and ran into another error:
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:01.591205956 90 0x110ea230 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:01.591248235 90 0x110ea230 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:01.591893184 90 0x110ea230 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Frame Number=0 Number of Objects=6 Vehicle_count=4 Person_count=2
0:00:01.768476748 90 0x27f9e30 WARN nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:01.768492628 90 0x27f9e30 WARN nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop: error: streaming stopped, reason not-negotiated (-4)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(1975): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)
Frame Number=1 Number of Objects=6 Vehicle_count=4 Person_count=2

Can you follow the instructions for dumping the GStreamer pipeline graph in the DeepStream SDK FAQ (Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums) and dump your graph for checking? The name of the dot file may differ, but it is the dot file that can be converted into a PNG picture.
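As a sketch, dumping and converting the graph could look like this. It assumes Graphviz is installed; note that GStreamer only writes the .dot file when the application calls gst_debug_bin_to_dot_file() (deepstream-app does this, but the Python sample may need the call added):

```shell
# Tell GStreamer where to write pipeline .dot files (the app itself must call
# gst_debug_bin_to_dot_file() for anything to appear here).
export GST_DEBUG_DUMP_DOT_DIR=/tmp/pipeline-dot
mkdir -p "$GST_DEBUG_DUMP_DOT_DIR"
# ... run the application here so it produces .dot files ...
# Convert every dumped graph to a PNG (requires Graphviz's dot tool).
for DOT in "$GST_DEBUG_DUMP_DOT_DIR"/*.dot; do
    [ -e "$DOT" ] || continue          # no graphs were dumped
    dot -Tpng "$DOT" -o "${DOT%.dot}.png"
done
```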