DeepStream pipeline waits for input indefinitely

• Hardware Platform (Jetson / GPU): GeForce RTX 3090
• DeepStream Version: 6.1.0
• TensorRT Version: 8.2.5.1-1+cuda11.4
• NVIDIA GPU Driver Version (valid for GPU only): 470.103.01
• Issue Type (questions, new requirements, bugs): questions
• CUDA Version: 11.4
• CUDNN Version: 8.4.0.27-1+cuda11.6
• Operating System: Ubuntu 20.04
• Python Version: 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
• TensorFlow Version: 2.7.0

Hello, I’m trying to create a pipeline using Python. The code is basically a copy-paste of the deepstream-test1 app, but with a fake sink, and it runs in an IPython notebook.
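Roughly, the pipeline construction looks like the sketch below. It is trimmed for the post, so treat it as an approximation of the notebook rather than the exact code: element names and properties follow the stock deepstream_test_1.py, the input path here is the sample H.264 stream, and the config file is the dsnvanalytics_pgie_config.txt from my project.

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.Pipeline()

# Same element chain as deepstream_test_1.py, except the sink is a fakesink
source    = Gst.ElementFactory.make("filesrc", "file-source")
parser    = Gst.ElementFactory.make("h264parse", "h264-parser")
decoder   = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
pgie      = Gst.ElementFactory.make("nvinfer", "primary-inference")
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
nvosd     = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
sink      = Gst.ElementFactory.make("fakesink", "fakesink")

source.set_property('location',
    "/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264")
streammux.set_property('width', 1920)
streammux.set_property('height', 1080)
streammux.set_property('batch-size', 1)
streammux.set_property('batched-push-timeout', 4000000)
pgie.set_property('config-file-path', "dsnvanalytics_pgie_config.txt")

for elem in (source, parser, decoder, streammux, pgie, nvvidconv, nvosd, sink):
    pipeline.add(elem)

# filesrc -> h264parse -> nvv4l2decoder, then the decoder feeds a request pad
# of nvstreammux; the rest of the chain is linked statically
source.link(parser)
parser.link(decoder)
decoder.get_static_pad("src").link(streammux.get_request_pad("sink_0"))
streammux.link(pgie)
pgie.link(nvvidconv)
nvvidconv.link(nvosd)
nvosd.link(sink)

# Standard bus handling: quit the main loop on EOS or error
def bus_call(bus, message, loop):
    if message.type in (Gst.MessageType.EOS, Gst.MessageType.ERROR):
        loop.quit()
    return True

loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)

print("Starting pipeline ")
pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except Exception as e:
    print(e)
pipeline.set_state(Gst.State.NULL)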

Here is the pgie config file I tried first (it is the original, unmodified pgie config):

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1

Here are the Jupyter logs:

0:00:02.099308021 135021      0x39fcb50 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:03.298661159 135021      0x39fcb50 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:03.316316948 135021      0x39fcb50 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:03.317012439 135021      0x39fcb50 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/home/mher/projects/counter/deepstream-analytics/dsnvanalytics_pgie_config.txt sucessfully

It has been stuck like this for a while now. When I send a keyboard interrupt, this is what comes up:

Starting pipeline 

---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-19-f021ae739399> in <module>
      3 pipeline.set_state(Gst.State.PLAYING)
      4 try:
----> 5     loop.run()
      6 except Exception as e:
      7     print(e)

/usr/lib/python3/dist-packages/gi/overrides/GLib.py in run(self)
    495         with register_sigint_fallback(self.quit):
    496             with wakeup_on_signal():
--> 497                 super(MainLoop, self).run()
    498 
    499 

/usr/lib/python3.8/contextlib.py in __exit__(self, type, value, traceback)
    118         if type is None:
    119             try:
--> 120                 next(self.gen)
    121             except StopIteration:
    122                 return False

/usr/lib/python3/dist-packages/gi/_ossighelper.py in register_sigint_fallback(callback)
    249     finally:
    250         if _sigint_called:
--> 251             signal.default_int_handler(signal.SIGINT, None)
    252         else:
    253             _callback_stack.pop()

KeyboardInterrupt: 

From this I conclude that the pipeline is waiting for a signal (I assume this is the input signal). I am new to DeepStream and need assistance; any help would be much appreciated.

By the way, the same thing happens when I try to run the app the normal way, using python3 deepstream_test_1.py sample_h264.mp4

deepstream_test_1 has h264parse in its pipeline, so it cannot work with the MP4 format.
You can use a demo H.264 elementary-stream file such as /opt/nvidia/deepstream/deepstream/samples/streams/sample_qHD.h264.
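If you do need to read an MP4 directly, the container has to be demuxed before h264parse. Below is a rough front-end sketch, not part of the stock sample, assuming the file carries an H.264 video track (sample_h264.mp4 is just a placeholder name):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline()

# filesrc ! qtdemux ! h264parse ! nvv4l2decoder ! ...
# instead of filesrc ! h264parse ! nvv4l2decoder ! ...
source  = Gst.ElementFactory.make("filesrc", "file-source")
demux   = Gst.ElementFactory.make("qtdemux", "qt-demux")
parser  = Gst.ElementFactory.make("h264parse", "h264-parser")
decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")

source.set_property("location", "sample_h264.mp4")  # placeholder file name

for elem in (source, demux, parser, decoder):
    pipeline.add(elem)

# qtdemux creates its video pad only after parsing the container,
# so the parser is linked from a pad-added callback instead of statically
def on_demux_pad_added(demux, pad, parser):
    name = pad.query_caps(None).get_structure(0).get_name()
    if name == "video/x-h264":
        pad.link(parser.get_static_pad("sink"))

demux.connect("pad-added", on_demux_pad_added, parser)

source.link(demux)
parser.link(decoder)
# the rest of the pipeline (nvstreammux, nvinfer, sink) stays the same
# as in deepstream_test_1.py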

I did as you suggested; there was no difference.

Do you mean you cannot run the original sample deepstream_test_1.py with the following command?

python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264

There’s no monitor on the device; that’s why I was trying to make it work with a fake sink or a file sink.

wsadmin@AIML1001:/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test1$ export GST_DEBUG=3
wsadmin@AIML1001:/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test1$ sudo python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

0:00:00.206939123 140673      0x58b7f50 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:01.401293821 140673      0x58b7f50 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640
1   OUTPUT kFLOAT conv2d_bbox     16x23x40
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:01.415318963 140673      0x58b7f50 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:01.416020096 140673      0x58b7f50 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Frame Number=0 Number of Objects=13 Vehicle_count=9 Person_count=4
0:00:01.549834157 140673      0x38dd6a0 WARN                 nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:01.549845609 140673      0x38dd6a0 WARN                 nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-negotiated (-4)
Frame Number=1 Number of Objects=11 Vehicle_count=8 Person_count=3
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2299): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)

And here are the versions:

wsadmin@AIML1001:/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test1$ dpkg --list | grep cud
ii  caffe-cuda                                                  1.0.0+git20200212.9b89154-0lambda2.4  amd64        Deep learning framework Caffe accelerated by CUDA (metapackage)
ii  caffe-tools-cuda                                            1.0.0+git20200212.9b89154-0lambda2.4  amd64        Fast and open framework for deep learning with CUDA (command line tools)
ii  cuda-cccl-11-6                                              11.6.55-1                             amd64        CUDA CCCL
ii  cuda-cccl-11-7                                              11.7.58-1                             amd64        CUDA CCCL
ii  cuda-cudart-11-1                                            11.1.74-1                             amd64        CUDA Runtime native Libraries
ii  cuda-cudart-11-6                                            11.6.55-1                             amd64        CUDA Runtime native Libraries
ii  cuda-cudart-11-7                                            11.7.60-1                             amd64        CUDA Runtime native Libraries
ii  cuda-cudart-dev-11-1                                        11.1.74-1                             amd64        CUDA Runtime native dev links, headers
ii  cuda-cudart-dev-11-6                                        11.6.55-1                             amd64        CUDA Runtime native dev links, headers
ii  cuda-cudart-dev-11-7                                        11.7.60-1                             amd64        CUDA Runtime native dev links, headers
ii  cuda-driver-dev-11-1                                        11.1.74-1                             amd64        CUDA Driver native dev stub library
ii  cuda-driver-dev-11-6                                        11.6.55-1                             amd64        CUDA Driver native dev stub library
ii  cuda-driver-dev-11-7                                        11.7.60-1                             amd64        CUDA Driver native dev stub library
ii  cuda-nvcc-11-1                                              11.1.105-1                            amd64        CUDA nvcc
ii  cuda-toolkit-11-6-config-common                             11.6.55-1                             all          Common config package for CUDA Toolkit 11.6.
ii  cuda-toolkit-11-7-config-common                             11.7.60-1                             all          Common config package for CUDA Toolkit 11.7.
ii  cuda-toolkit-11-config-common                               11.7.60-1                             all          Common config package for CUDA Toolkit 11.
ii  cuda-toolkit-config-common                                  11.7.60-1                             all          Common config package for CUDA Toolkit.
ii  cudnn-license                                               8.2.1-0lambda1                        all          NVIDIA CUDA deep neural network, run-time libraries (license)
ii  lambda-stack-cuda                                           0.1.12~20.04.3                        all          Deep learning software stack from Lambda Labs (CUDA)
ii  libcaffe-cuda1:amd64                                        1.0.0+git20200212.9b89154-0lambda2.4  amd64        Fast and open framework for deep learning with CUDA (shared library)
ii  libcudart11.0:amd64                                         11.1.1-0lambda2                       amd64        CUDA runtime library
ii  libcudnn8                                                   8.4.0.27-1+cuda11.6                   amd64        cuDNN runtime libraries
ii  libcudnn8-dev                                               8.4.0.27-1+cuda11.6                   amd64        cuDNN development libraries and headers
ii  libnvinfer-bin                                              8.2.5-1+cuda11.4                      amd64        TensorRT binaries
ii  libnvinfer-dev                                              8.2.5-1+cuda11.4                      amd64        TensorRT development libraries and headers
ii  libnvinfer-doc                                              8.2.5-1+cuda11.4                      all          TensorRT documentation
ii  libnvinfer-plugin-dev                                       8.2.5-1+cuda11.4                      amd64        TensorRT plugin libraries
ii  libnvinfer-plugin8                                          8.2.5-1+cuda11.4                      amd64        TensorRT plugin libraries
ii  libnvinfer-samples                                          8.2.5-1+cuda11.4                      all          TensorRT samples
ii  libnvinfer8                                                 8.2.5-1+cuda11.4                      amd64        TensorRT runtime libraries
ii  libnvonnxparsers-dev                                        8.2.5-1+cuda11.4                      amd64        TensorRT ONNX libraries
ii  libnvonnxparsers8                                           8.2.5-1+cuda11.4                      amd64        TensorRT ONNX libraries
ii  libnvparsers-dev                                            8.2.5-1+cuda11.4                      amd64        TensorRT parsers libraries
ii  libnvparsers8                                               8.2.5-1+cuda11.4                      amd64        TensorRT parsers libraries
ii  nv-tensorrt-repo-ubuntu2004-cuda11.4-trt8.2.5.1-ga-20220505 1-1                                   amd64        nv-tensorrt repository configuration files
ii  nvidia-cuda-dev:amd64                                       11.1.1-0lambda2                       amd64        CUDA development files
ii  nvidia-cuda-doc                                             11.1.1-0lambda2                       all          CUDA toolkit documentation
ii  nvidia-cuda-gdb                                             11.1.1-0lambda2                       amd64        CUDA Debugger
ii  nvidia-cuda-toolkit                                         11.1.1-0lambda2                       amd64        CUDA development toolkit
ii  python3-caffe-cuda                                          1.0.0+git20200212.9b89154-0lambda2.4  amd64        Fast and open framework for deep learning with CUDA (Python 3)
ii  python3-pycuda                                              2019.1.2+dfsg-0lambda2                amd64        Easy, Pythonic access to NVIDIA CUDA parallel computation API (Python 3)
ii  python3-skcuda                                              0.5.3-0lambda2                        amd64        Python interface to GPU-powered libraries (Python 3)
ii  python3-tensorflow-cuda                                     2.7.0-0lambda1                        amd64        Open-source software library for Machine Intelligence (Python 3, CUDA)
ii  python3-torch-cuda                                          1.10.1+ds-0lambda1                    amd64        Tensors and Dynamic neural networks GPU accelerated (Python 3)
ii  python3-torchvision-cuda                                    0.11.2-0lambda1                       amd64        Image and video datasets and models for PyTorch (Python 3, CUDA)
ii  tensorrt                                                    8.2.5.1-1+cuda11.4                    amd64        Meta package of TensorRT

Can you enable the debug log with “export GST_DEBUG=3” before running the case?

Is there a monitor connected to your device? Can you check the CUDA version on your device? CUDA 11.6 is required for DeepStream 6.1. See Quickstart Guide — DeepStream 6.1.1 Release documentation.

Edited the post above to include versions

It appears to me that some of the packages are 11.6 and some are 11.4. Should I uninstall DeepStream 6.1 and install 6.0 instead?

Can you run “nvidia-smi”? Or you can run “nvcc -V”

Output of nvidia-smi

Thu Jun  2 10:18:29 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.129.06   Driver Version: 470.129.06   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:29:00.0 Off |                  N/A |
| 30%   28C    P8    12W / 350W |   1047MiB / 24268MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce ...  Off  | 00000000:41:00.0 Off |                  N/A |
| 30%   29C    P8    14W / 350W |      2MiB / 24268MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA GeForce ...  Off  | 00000000:61:00.0 Off |                  N/A |
| 30%   23C    P8     8W / 350W |      2MiB / 24268MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A    137389      C   /usr/bin/python3                  983MiB |
+-----------------------------------------------------------------------------+

Output of nvcc -V:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Oct_12_20:09:46_PDT_2020
Cuda compilation tools, release 11.1, V11.1.105
Build cuda_11.1.TC455_06.29190527_0

Please reinstall your system according to the Quickstart Guide — DeepStream 6.1.1 Release documentation.

I want to avoid installing drivers; will running everything in a Docker container work?

Edit:
Never mind, I’m upgrading CUDA and the drivers; I’ll get back to you once I reinstall everything.

No. Even in Docker, the driver must be “NVIDIA driver 510.47.03”.

I installed the new versions of everything. Here is the output when I run sudo python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264:

wsadmin@AIML1001:/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test1$ sudo python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264

(gst-plugin-scanner:6831): GStreamer-WARNING **: 12:31:24.399: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:6831): GStreamer-WARNING **: 12:31:24.400: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory

(gst-plugin-scanner:6831): GStreamer-WARNING **: 12:31:24.401: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_ucx.so': libucs.so.0: cannot open shared object file: No such file or directory
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

0:00:00.588110988  6830      0x3ed6550 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1482 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:01.188124237  6830      0x3ed6550 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1888> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:01.201985898  6830      0x3ed6550 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1993> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:01.202008440  6830      0x3ed6550 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
0:00:21.655874454  6830      0x3ed6550 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1946> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640
1   OUTPUT kFLOAT conv2d_bbox     16x23x40
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:21.673194598  6830      0x3ed6550 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Frame Number=0 Number of Objects=13 Vehicle_count=9 Person_count=4
0:00:21.813124384  6830      0x1f3e6a0 WARN                 nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:21.813135485  6830      0x1f3e6a0 WARN                 nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-negotiated (-4)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2299): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)
Frame Number=1 Number of Objects=11 Vehicle_count=8 Person_count=3

Here is the output of nvidia-smi

Thu Jun  2 12:34:58 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.73.05    Driver Version: 510.73.05    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:29:00.0 Off |                  N/A |
| 30%   35C    P8    23W / 350W |      5MiB / 24576MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce ...  On   | 00000000:41:00.0 Off |                  N/A |
| 30%   34C    P8    26W / 350W |      5MiB / 24576MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA GeForce ...  On   | 00000000:61:00.0 Off |                  N/A |
| 30%   28C    P8    20W / 350W |     19MiB / 24576MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1750      G   /usr/lib/xorg/Xorg                  4MiB |
|    1   N/A  N/A      1750      G   /usr/lib/xorg/Xorg                  4MiB |
|    2   N/A  N/A      1750      G   /usr/lib/xorg/Xorg                  9MiB |
|    2   N/A  N/A      1867      G   /usr/bin/gnome-shell                8MiB |
+-----------------------------------------------------------------------------+

And here is the output of nvcc -V:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_Mar__8_18:18:20_PST_2022
Cuda compilation tools, release 11.6, V11.6.124
Build cuda_11.6.r11.6/compiler.31057947_0

Here is the output of dpkg --list | grep cud

ii  caffe-cuda                                                  1.0.0+git20200212.9b89154-0lambda3  amd64        Deep learning framework Caffe accelerated by CUDA (metapackage)
ii  caffe-tools-cuda                                            1.0.0+git20200212.9b89154-0lambda3  amd64        Fast and open framework for deep learning with CUDA (command line tools)
ii  cuda-cccl-11-6                                              11.6.55-1                           amd64        CUDA CCCL
ii  cuda-cccl-11-7                                              11.7.58-1                           amd64        CUDA CCCL
ii  cuda-cudart-11-1                                            11.1.74-1                           amd64        CUDA Runtime native Libraries
ii  cuda-cudart-11-6                                            11.6.55-1                           amd64        CUDA Runtime native Libraries
ii  cuda-cudart-11-7                                            11.7.60-1                           amd64        CUDA Runtime native Libraries
ii  cuda-cudart-dev-11-1                                        11.1.74-1                           amd64        CUDA Runtime native dev links, headers
ii  cuda-cudart-dev-11-6                                        11.6.55-1                           amd64        CUDA Runtime native dev links, headers
ii  cuda-cudart-dev-11-7                                        11.7.60-1                           amd64        CUDA Runtime native dev links, headers
ii  cuda-driver-dev-11-1                                        11.1.74-1                           amd64        CUDA Driver native dev stub library
ii  cuda-driver-dev-11-6                                        11.6.55-1                           amd64        CUDA Driver native dev stub library
ii  cuda-driver-dev-11-7                                        11.7.60-1                           amd64        CUDA Driver native dev stub library
ii  cuda-nvcc-11-1                                              11.1.105-1                          amd64        CUDA nvcc
ii  cuda-toolkit-11-6-config-common                             11.6.55-1                           all          Common config package for CUDA Toolkit 11.6.
ii  cuda-toolkit-11-7-config-common                             11.7.60-1                           all          Common config package for CUDA Toolkit 11.7.
ii  cuda-toolkit-11-config-common                               11.7.60-1                           all          Common config package for CUDA Toolkit 11.
ii  cuda-toolkit-config-common                                  11.7.60-1                           all          Common config package for CUDA Toolkit.
ii  cudnn-license                                               8.3.3.40-0lambda1                   all          NVIDIA CUDA deep neural network, run-time libraries (license)
ii  lambda-stack-cuda                                           0.1.12~20.04.4                      all          Deep learning software stack from Lambda Labs (CUDA)
ii  libcaffe-cuda1:amd64                                        1.0.0+git20200212.9b89154-0lambda3  amd64        Fast and open framework for deep learning with CUDA (shared library)
ii  libcudart11.0:amd64                                         11.6.55~11.6.2-0lambda1             amd64        NVIDIA CUDA Runtime Library
ii  libcudnn8                                                   8.4.1.50-1+cuda11.6                 amd64        cuDNN runtime libraries
ii  libcudnn8-dev                                               8.4.1.50-1+cuda11.6                 amd64        cuDNN development libraries and headers
ii  libnvinfer-bin                                              8.2.5-1+cuda11.4                    amd64        TensorRT binaries
ii  libnvinfer-dev                                              8.2.5-1+cuda11.4                    amd64        TensorRT development libraries and headers
ii  libnvinfer-doc                                              8.2.5-1+cuda11.4                    all          TensorRT documentation
ii  libnvinfer-plugin-dev                                       8.2.5-1+cuda11.4                    amd64        TensorRT plugin libraries
ii  libnvinfer-plugin8                                          8.2.5-1+cuda11.4                    amd64        TensorRT plugin libraries
ii  libnvinfer-samples                                          8.2.5-1+cuda11.4                    all          TensorRT samples
ii  libnvinfer8                                                 8.2.5-1+cuda11.4                    amd64        TensorRT runtime libraries
ii  libnvonnxparsers-dev                                        8.2.5-1+cuda11.4                    amd64        TensorRT ONNX libraries
ii  libnvonnxparsers8                                           8.2.5-1+cuda11.4                    amd64        TensorRT ONNX libraries
ii  libnvparsers-dev                                            8.2.5-1+cuda11.4                    amd64        TensorRT parsers libraries
ii  libnvparsers8                                               8.2.5-1+cuda11.4                    amd64        TensorRT parsers libraries
ii  nv-tensorrt-repo-ubuntu2004-cuda11.4-trt8.2.5.1-ga-20220505 1-1                                 amd64        nv-tensorrt repository configuration files
ii  nvidia-cuda-dev:amd64                                       11.6.2-0lambda1                     amd64        NVIDIA CUDA development files
ii  nvidia-cuda-doc                                             11.1.1-0lambda2                     all          CUDA toolkit documentation
ii  nvidia-cuda-gdb                                             11.6.124~11.6.2-0lambda1            amd64        NVIDIA CUDA Debugger (GDB)
ii  nvidia-cuda-toolkit                                         11.6.2-0lambda1                     amd64        NVIDIA CUDA development toolkit
ii  nvidia-cuda-toolkit-doc                                     11.6.2-0lambda1                     all          NVIDIA CUDA and OpenCL documentation
ii  python3-caffe-cuda                                          1.0.0+git20200212.9b89154-0lambda3  amd64        Fast and open framework for deep learning with CUDA (Python 3)
ii  python3-pycuda                                              2019.1.2+dfsg-0lambda2              amd64        Easy, Pythonic access to NVIDIA CUDA parallel computation API (Python 3)
ii  python3-skcuda                                              0.5.3-0lambda2                      amd64        Python interface to GPU-powered libraries (Python 3)
ii  python3-tensorflow-cuda                                     2.8.0-0lambda1                      amd64        Open-source software library for Machine Intelligence (Python 3, CUDA)
ii  python3-torch-cuda                                          1.11.0+ds-0lambda1                  amd64        Tensors and Dynamic neural networks GPU accelerated (Python 3)
ii  python3-torchvision-cuda                                    0.12.0-0lambda1                     amd64        Image and video datasets and models for PyTorch (Python 3, CUDA)
ii  tensorrt                                                    8.2.5.1-1+cuda11.4                  amd64        Meta package of TensorRT

Please answer the questions above.

As I said, there is no screen connected to the computer that is running the DeepStream application.

Here is the output after export GST_DEBUG=3 and running sudo python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264:

Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

0:00:00.207596980  6875      0x41e5d50 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:01.403619906  6875      0x41e5d50 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640
1   OUTPUT kFLOAT conv2d_bbox     16x23x40
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:01.416618044  6875      0x41e5d50 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:01.417206466  6875      0x41e5d50 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Frame Number=0 Number of Objects=13 Vehicle_count=9 Person_count=4
0:00:01.552688747  6875      0x21feaa0 WARN                 nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:01.552698445  6875      0x21feaa0 WARN                 nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-negotiated (-4)
Frame Number=1 Number of Objects=11 Vehicle_count=8 Person_count=3
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2299): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)

The sample requires a screen to display the output.

Please replace the “nveglglessink” with “fakesink” in the code.
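Roughly, the change in deepstream_test_1.py is just where the sink is created (a sketch; the sync setting is optional):

# Where the sample creates the sink:
print("Creating fakesink \n")
sink = Gst.ElementFactory.make("fakesink", "fakesink")
if not sink:
    sys.stderr.write(" Unable to create fakesink \n")

# Optional: do not sync on the clock, since nothing is rendered
sink.set_property('sync', False)

# pipeline.add(sink) and the final link into the sink stay as in the
# original sample.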

That worked! Thank you.

Here is the output:

wsadmin@AIML1001:/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test1$ sudo python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating fakesink

Playing file /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

0:00:00.192215606  6936      0x270cc00 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:01.390115251  6936      0x270cc00 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640
1   OUTPUT kFLOAT conv2d_bbox     16x23x40
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:01.403458398  6936      0x270cc00 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:01.404020093  6936      0x270cc00 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Frame Number=0 Number of Objects=13 Vehicle_count=9 Person_count=4
Frame Number=1 Number of Objects=11 Vehicle_count=8 Person_count=3
Frame Number=2 Number of Objects=11 Vehicle_count=7 Person_count=4
Frame Number=3 Number of Objects=11 Vehicle_count=7 Person_count=4
Frame Number=4 Number of Objects=11 Vehicle_count=8 Person_count=3
...
Frame Number=1437 Number of Objects=13 Vehicle_count=12 Person_count=1
Frame Number=1438 Number of Objects=15 Vehicle_count=14 Person_count=1
Frame Number=1439 Number of Objects=14 Vehicle_count=12 Person_count=2
Frame Number=1440 Number of Objects=14 Vehicle_count=13 Person_count=1
Frame Number=1441 Number of Objects=0 Vehicle_count=0 Person_count=0
End-of-stream
