Streaming stopped, reason not-linked (-1) while using deepstream-test2

Please provide complete information as applicable to your setup.

• DeepStream Version - 5.1
• NVIDIA GPU Driver Version (valid for GPU only)
| NVIDIA-SMI 460.39       Driver Version: 460.39       CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:27:00.0 Off |                    0 |
| N/A   49C    P0    28W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla T4            Off  | 00000000:83:00.0 Off |                    0 |
| N/A   51C    P0    35W /  70W |    920MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  Tesla T4            Off  | 00000000:A3:00.0 Off |                    0 |
| N/A   63C    P0    29W /  70W |   1078MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  Tesla T4            Off  | 00000000:C3:00.0 Off |                    0 |
| N/A   48C    P0    28W /  70W |      0MiB / 15109MiB |      5%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |

0:02:33.901732630 1660 0x2565ef0 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.1/sources/deepstream_python_apps/apps/age_gender/deepti/testing/Cars/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine successfully
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 4x34x60

0:02:33.927271740 1660 0x2565ef0 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest2_pgie_config.txt sucessfully
Decodebin child added: source
Decodebin child added: decodebin0
Decodebin child added: rtph264depay0
Decodebin child added: h264parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
In cb_newpad
KLT Tracker Init
0:02:35.327438251 1660 0x1e374a0 WARN nvinfer gstnvinfer.cpp:1812:gst_nvinfer_submit_input_buffer: error: Internal data stream error.
0:02:35.327483626 1660 0x1e374a0 WARN nvinfer gstnvinfer.cpp:1812:gst_nvinfer_submit_input_buffer: error: streaming stopped, reason not-linked (-1)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(1812): gst_nvinfer_submit_input_buffer (): /GstPipeline:pipeline0/GstNvInfer:secondary1-nvinference-engine:
streaming stopped, reason not-linked (-1)
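A `not-linked (-1)` stream error from an element (here `secondary1-nvinference-engine`) usually means that element's output was never connected downstream: in the PyGObject bindings, `Gst.Element.link()` returns a boolean that is easy to ignore, so a failed link only surfaces later as this error when data starts flowing. A minimal sketch of the check-every-link pattern (the function and the element names in the comment are illustrative, not part of the sample app):

```python
def link_many(*elements):
    """Link a chain of GStreamer-style elements, raising on the first failure.

    Each element is expected to expose link(next) -> bool and a .name
    attribute, as Gst.Element does in the PyGObject bindings.
    """
    for up, down in zip(elements, elements[1:]):
        if not up.link(down):
            raise RuntimeError(
                "Failed to link %s -> %s" % (up.name, down.name))

# In a deepstream-test2-style pipeline this might be called as, e.g.:
# link_many(streammux, pgie, tracker, sgie1, sgie2, sgie3, nvvidconv, nvosd, sink)
```

Failing fast this way points at the exact pair of elements whose caps or pads do not agree, instead of the generic `not-linked` error at runtime. Pad-level links made in callbacks such as `cb_newpad` should be checked the same way (`Gst.Pad.link()` returns a `Gst.PadLinkReturn`, where only `OK` indicates success).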

Please run "nvcc -V" to get the CUDA version, and run "dpkg -l | grep TensorRT" to get the TensorRT version.

Please tell us the command line you use to run deepstream-test2.

root@a51e51c1c4a2:/opt/nvidia/deepstream/deepstream-5.1# nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Tue_Sep_15_19:10:02_PDT_2020
Cuda compilation tools, release 11.1, V11.1.74
Build cuda_11.1.TC455_06.29069683_0
root@a51e51c1c4a2:/opt/nvidia/deepstream/deepstream-5.1# dpkg -l |grep TensorRT
ii graphsurgeon-tf 7.2.1-1+cuda11.1 amd64 GraphSurgeon for TensorRT package
ii libnvinfer-bin 7.2.1-1+cuda11.1 amd64 TensorRT binaries
ii libnvinfer-dev 7.2.1-1+cuda11.1 amd64 TensorRT development libraries and headers
ii libnvinfer-plugin-dev 7.2.1-1+cuda11.1 amd64 TensorRT plugin libraries and headers
ii libnvinfer-plugin7 7.2.1-1+cuda11.1 amd64 TensorRT plugin library
ii libnvinfer7 7.2.1-1+cuda11.1 amd64 TensorRT runtime libraries
ii libnvonnxparsers-dev 7.2.1-1+cuda11.1 amd64 TensorRT ONNX libraries
ii libnvonnxparsers7 7.2.1-1+cuda11.1 amd64 TensorRT ONNX libraries
ii libnvparsers-dev 7.2.1-1+cuda11.1 amd64 TensorRT parsers libraries
ii libnvparsers7 7.2.1-1+cuda11.1 amd64 TensorRT parsers libraries
ii python3-libnvinfer 7.2.1-1+cuda11.1 amd64 Python 3 bindings for TensorRT
ii python3-libnvinfer-dev 7.2.1-1+cuda11.1 amd64 Python 3 development package for TensorRT
ii uff-converter-tf 7.2.1-1+cuda11.1 amd64 UFF converter for TensorRT package

Please tell us the command line you use to run deepstream-test2.