"reason not-negotiated (-4)" python deepstream-app1 is stopped

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
RTX3060(laptop)

• DeepStream Version
6.1.1

• TensorRT Version
8.4.1.5

• NVIDIA GPU Driver Version (valid for GPU only)
525.60.11

• Issue Type( questions, new requirements, bugs)
When I run "deepstream_test_1.py" (from "deepstream_python_apps"), I only get inference results for 7 frames (0 to 6), and the result window* disappears as soon as it appears.
(*I couldn't see what it contained because it disappeared so quickly.)

• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

Caution:
Sorry, this article is long.
Steps 1 to 7 describe my environment setup.
The problems start at Step 8, so you can skip ahead to Step 8.

1. Create a container and attach

$ sudo docker create \
--name nds028 \
-it \
-p 8888:8888 \
-e DISPLAY=$DISPLAY \
-v /home/chomsky/workspace/share:/root/sharespace \
-v /tmp/.X11-unix:/tmp/.X11-unix \
--device /dev/video0:/dev/video0 \
--gpus all \
ubuntu:20.04 /bin/bash

and I got the output below. (Actually, I'm not sure about some of the options for creating the container; maybe I missed some.)

a2354b964bf62021f9e424a50b2566765a2351dda82f2a7ef1550677b724aa48

and to start and attach:

$ sudo docker start a2354b96
$ sudo docker exec -it a2354b96 /bin/bash
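Before installing anything, it may be worth confirming inside the container that the GPU and display are actually reachable; a hypothetical sanity check (assuming the NVIDIA container toolkit is set up on the host, since with a plain `ubuntu:20.04` image `--gpus all` only works when that toolkit is installed):

```shell
# Run inside the container: if the toolkit is set up, the host driver's
# nvidia-smi is injected and should list the RTX 3060.
nvidia-smi || echo "WARNING: nvidia-smi not available; GPU passthrough is not working"
echo "DISPLAY=$DISPLAY"   # should match the host's DISPLAY, e.g. :0
```

If `nvidia-smi` is missing here, the later EGL/DRI warnings are expected.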

2. OpenCV test
I know this step is not necessary; I just wanted to check the status of "$DISPLAY" using some OpenCV code (e.g., showing an image, a video, or the webcam).

# apt-get update
# apt install python3-pip
# pip install opencv-python
# apt-get install ffmpeg libsm6 libxext6 

3. CUDA installation (11.7)
I checked that DeepStream 6.1.1 needs CUDA 11.7. (Quickstart Guide — DeepStream 6.1.1 Release documentation)

# wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
# mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
# wget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda-repo-wsl-ubuntu-11-7-local_11.7.0-1_amd64.deb
# dpkg -i cuda-repo-wsl-ubuntu-11-7-local_11.7.0-1_amd64.deb
# cp /var/cuda-repo-wsl-ubuntu-11-7-local/cuda-96193861-keyring.gpg /usr/share/keyrings/
# apt-get update
# apt-get install cuda

and I got the following:

(...)
done.
done.
Processing triggers for fontconfig (2.13.1-2ubuntu3) ...
Processing triggers for mime-support (3.64ubuntu1) ...

4. cuDNN installation (8.6.0.163)
I checked that TensorRT 8.4.1 needs cuDNN. (Quick Start Guide :: NVIDIA Deep Learning TensorRT Documentation)

(I already had the .deb file.)

# dpkg -i cudnn-local-repo-ubuntu2004-8.6.0.163_1.0-1_amd64.deb
# cp /var/cudnn-local-repo-ubuntu2004-8.6.0.163/cudnn-local-B0FE0A41-keyring.gpg   /usr/share/keyrings/ -v
# apt-get update
# apt-get install libcudnn8
# apt-get install libcudnn8-dev
# apt-get install libcudnn8-sample

and I got the following:

(...)
Unpacking libcudnn8 (8.6.0.163-1+cuda11.8) ...
Setting up libcudnn8 (8.6.0.163-1+cuda11.8) ...

(...)
Unpacking libcudnn8-dev (8.6.0.163-1+cuda11.8) ...
Setting up libcudnn8-dev (8.6.0.163-1+cuda11.8) ...
(...)
Unpacking libcudnn8-samples (8.6.0.163-1+cuda11.8) ...
Setting up libcudnn8-samples (8.6.0.163-1+cuda11.8) ...

and to test cuDNN:

# apt-get install libfreeimage3 libfreeimage-dev
# cp -r /usr/src/cudnn_samples_v8/ ./ -v
# cd cudnn_samples_v8/mnistCUDNN
# make clean
# make
# ./mnistCUDNN

and I got the following:

(...)
Result of classification: 1 3 5

Test passed!

5. TensorRT installation (8.4.1.5)
I checked that DeepStream needs TensorRT 8.4.1.5. (Quickstart Guide — DeepStream 6.1.1 Release documentation)

(I already had the .deb file.)

# dpkg -i nv-tensorrt-repo-ubuntu2004-cuda11.6-trt8.4.1.5-ga-20220604_1-1_amd64.deb
# apt-key add /var/nv-tensorrt-repo-ubuntu2004-cuda11.6-trt8.4.1.5-ga-20220604/9a60d8bf.pub
# apt-get update
# apt-get install tensorrt -y
# python3 -m pip install numpy
# apt-get install python3-libnvinfer-dev -y
# python3 -m pip install protobuf
# apt-get install uff-converter-tf -y
# pip install numpy onnx
# apt-get install onnx-graphsurgeon -y

and I got the following:

(...)
Setting up libnvinfer8 (8.4.1-1+cuda11.6) ...
Setting up libnvparsers8 (8.4.1-1+cuda11.6) ...
Setting up libnvinfer-plugin8 (8.4.1-1+cuda11.6) ...
Setting up libnvonnxparsers8 (8.4.1-1+cuda11.6) ...
Setting up libnvinfer-dev (8.4.1-1+cuda11.6) ...
Setting up libnvonnxparsers-dev (8.4.1-1+cuda11.6) ...
Setting up libnvparsers-dev (8.4.1-1+cuda11.6) ...
Setting up libnvinfer-plugin-dev (8.4.1-1+cuda11.6) ...
Setting up libnvinfer-bin (8.4.1-1+cuda11.6) ...
Setting up libnvinfer-samples (8.4.1-1+cuda11.6) ...
Setting up tensorrt (8.4.1.5-1+cuda11.6) ...
(...)
Setting up python3-libnvinfer (8.4.1-1+cuda11.6) ...
Setting up python3-libnvinfer-dev (8.4.1-1+cuda11.6) ..
(...)
Setting up graphsurgeon-tf (8.4.1-1+cuda11.6) ...
Setting up uff-converter-tf (8.4.1-1+cuda11.6) ...
(...)
Setting up onnx-graphsurgeon (8.4.1-1+cuda11.6) ...

and I verified the TensorRT installation with the command below:

# dpkg -l | grep TensorRT
ii  graphsurgeon-tf                                             8.4.1-1+cuda11.6                 amd64        GraphSurgeon for TensorRT package
ii  libnvinfer-bin                                              8.4.1-1+cuda11.6                 amd64        TensorRT binaries
ii  libnvinfer-dev                                              8.4.1-1+cuda11.6                 amd64        TensorRT development libraries and headers
ii  libnvinfer-plugin-dev                                       8.4.1-1+cuda11.6                 amd64        TensorRT plugin libraries
ii  libnvinfer-plugin8                                          8.4.1-1+cuda11.6                 amd64        TensorRT plugin libraries
ii  libnvinfer-samples                                          8.4.1-1+cuda11.6                 all          TensorRT samples
ii  libnvinfer8                                                 8.4.1-1+cuda11.6                 amd64        TensorRT runtime libraries
ii  libnvonnxparsers-dev                                        8.4.1-1+cuda11.6                 amd64        TensorRT ONNX libraries
ii  libnvonnxparsers8                                           8.4.1-1+cuda11.6                 amd64        TensorRT ONNX libraries
ii  libnvparsers-dev                                            8.4.1-1+cuda11.6                 amd64        TensorRT parsers libraries
ii  libnvparsers8                                               8.4.1-1+cuda11.6                 amd64        TensorRT parsers libraries
ii  onnx-graphsurgeon                                           8.4.1-1+cuda11.6                 amd64        ONNX GraphSurgeon for TensorRT package
ii  python3-libnvinfer                                          8.4.1-1+cuda11.6                 amd64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                                      8.4.1-1+cuda11.6                 amd64        Python 3 development package for TensorRT
ii  tensorrt                                                    8.4.1.5-1+cuda11.6               amd64        Meta package for TensorRT
ii  uff-converter-tf                                            8.4.1-1+cuda11.6                 amd64        UFF converter for TensorRT package

And I tested converting an ONNX model to a TRT engine.
(Quick Start Guide :: NVIDIA Deep Learning TensorRT Documentation)

# wget https://s3.amazonaws.com/download.onnx/models/opset_8/resnet50.tar.gz
# tar xzf resnet50.tar.gz
# trtexec --onnx=resnet50/model.onnx --saveEngine=resnet50/resnet_engine.trt
bash: trtexec: command not found

So I added an alias to ~/.bashrc and retried.
(e.g., alias trtexec="/usr/src/tensorrt/bin/trtexec")
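As an alternative to the alias (which only interactive shells expand), the TensorRT binaries directory can simply be put on PATH; a sketch:

```shell
# trtexec is shipped under /usr/src/tensorrt/bin but is not on PATH by
# default; exporting the directory also works in scripts, where aliases
# are not expanded.
export PATH=/usr/src/tensorrt/bin:$PATH
```

Adding the export line to ~/.bashrc makes it persistent for new shells.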

I got the following:

&&&& PASSED TensorRT.trtexec [TensorRT v8401] # /usr/src/tensorrt/bin/trtexec --onnx=resnet50/model.onnx --saveEngine=resnet50/resnet_engine.trt

6. DeepStream installation

6-1 Install dependencies

# apt install \
libssl1.1 \
libgstreamer1.0-0 \
gstreamer1.0-tools \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly \
gstreamer1.0-libav \
libgstreamer-plugins-base1.0-dev \
libgstrtspserver-1.0-0 \
libjansson4 \
libyaml-cpp-dev \
gcc \
make \
git \
python3

6-2 librdkafka install

# git clone https://github.com/edenhill/librdkafka.git
# cd librdkafka
# git reset --hard 7101c2310341ab3f4675fc565f64f0967e135a6a
# ./configure
# make 

But I got an error:

/usr/bin/env: 'python': No such file or directory

So, I edited ~/.bashrc again.
(e.g., alias python="python3")
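Note that an alias in ~/.bashrc is only expanded by interactive shells; the failing line (`/usr/bin/env: 'python': No such file or directory`) comes from a build script, so a more reliable fix is to put a real `python` entry on PATH. A sketch (on Ubuntu 20.04, `apt-get install python-is-python3` achieves the same thing):

```shell
# Create a real "python" command pointing at python3, so scripts that run
# "/usr/bin/env python" find it (an alias would not be visible to them).
ln -sf "$(command -v python3)" /usr/local/bin/python
```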

# make
(...)
Updating
CONFIGURATION.md CONFIGURATION.md.tmp differ: char 345, line 6
Checking  integrity
CONFIGURATION.md               OK
examples/rdkafka_example       OK
examples/rdkafka_performance   OK
examples/rdkafka_example_cpp   OK
make[1]: Entering directory '/root/workspace/librdkafka/src'
Checking librdkafka integrity
librdkafka.so.1                OK
librdkafka.a                   OK
Symbol visibility              OK
make[1]: Leaving directory '/root/workspace/librdkafka/src'
make[1]: Entering directory '/root/workspace/librdkafka/src-cpp'
Checking librdkafka++ integrity
librdkafka++.so.1              OK
librdkafka++.a                 OK
make[1]: Leaving directory '/root/workspace/librdkafka/src-cpp'

Then I ran "make install" and copied the libraries.

# make install
# mkdir -p /opt/nvidia/deepstream/deepstream-6.1/lib
# cp /usr/local/lib/librdkafka* /opt/nvidia/deepstream/deepstream-6.1/lib -v

I got the following:

'/usr/local/lib/librdkafka++.a' -> '/opt/nvidia/deepstream/deepstream-6.1/lib/librdkafka++.a'
'/usr/local/lib/librdkafka++.so' -> '/opt/nvidia/deepstream/deepstream-6.1/lib/librdkafka++.so'
'/usr/local/lib/librdkafka++.so.1' -> '/opt/nvidia/deepstream/deepstream-6.1/lib/librdkafka++.so.1'
'/usr/local/lib/librdkafka.a' -> '/opt/nvidia/deepstream/deepstream-6.1/lib/librdkafka.a'
'/usr/local/lib/librdkafka.so' -> '/opt/nvidia/deepstream/deepstream-6.1/lib/librdkafka.so'
'/usr/local/lib/librdkafka.so.1' -> '/opt/nvidia/deepstream/deepstream-6.1/lib/librdkafka.so.1'

6-3 (Finally!) Install DeepStream.

# apt-get install ./deepstream-6.1_6.1.1-1_amd64.deb

I got the following:

---------------------------------------------------------------------------------------
NOTE: sources and samples folders will be found in /opt/nvidia/deepstream/deepstream-6.1
---------------------------------------------------------------------------------------
Processing triggers for libc-bin (2.31-0ubuntu9.9) ...
N: Download is performed unsandboxed as root as file '/root/sharespace/deepstream-6.1_6.1.1-1_amd64.deb' couldn't be accessed by user '_apt'. - pkgAcquire::Run (13: Permission denied)

I saw "Permission denied", but I thought it would still work, so I moved on.

7. deepstream-python-apps
(deepstream_python_apps/bindings at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub)

7-1 Install dependencies

# apt install python3-gi python3-dev python3-gst-1.0 python-gi-dev git python-dev \
    python3 python3-pip python3.8-dev cmake g++ build-essential libglib2.0-dev \
    libglib2.0-dev-bin libgstreamer1.0-dev libtool m4 autoconf automake libgirepository1.0-dev libcairo2-dev

7-2 Clone the source

# cd /opt/nvidia/deepstream/deepstream-6.1/sources/
# git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.git

7-3 Initialize the submodules

# cd deepstream_python_apps
# git submodule update --init

I got the following:

Submodule path '3rdparty/gst-python': checked out '1a8f48a6c2911b308231a3843f771a50775cbd2e'
Submodule path '3rdparty/pybind11': checked out '3b1dbebabc801c9cf6f0953a4c20b904d444f879'

7-4 Installing Gst-python

# apt-get install -y apt-transport-https ca-certificates
# update-ca-certificates

I got the following:

done.
done.

and then make and make install:

# cd 3rdparty/gst-python/
# ./autogen.sh
# make
# sudo make install

7-5 Compiling the bindings

# cd deepstream_python_apps/bindings
# mkdir build
# cd build
# cmake ..
# make

I got the following:

(...)
removing build/bdist.linux-x86_64/wheel
[100%] Built target pip_wheel

and I installed it using the wheel.

# pip3 install ./pyds-1.1.4-py3-none*.whl

I got the following:

Successfully installed pgi-0.0.11.2 pycairo-1.23.0 pyds-1.1.4

8. Running deepstream python apps

8-1 First try

python deepstream_test_1.py ../../../../samples/streams/sample_720p.h264 

(gst-plugin-scanner:18302): GStreamer-WARNING **: 16:33:44.981: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_deepstream_bins.so': libjson-glib-1.0.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:18302): GStreamer-WARNING **: 16:33:46.147: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_ucx.so': libucs.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:18302): GStreamer-WARNING **: 16:33:46.157: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory

(gst-plugin-scanner:18302): GStreamer-WARNING **: 16:33:46.157: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_msgconv.so': libjson-glib-1.0.so.0: cannot open shared object file: No such file or directory

(gst-plugin-scanner:18302): GStreamer-WARNING **: 16:33:46.192: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
Creating Pipeline 
 
Creating Source 
 
Creating H264Parser 

Creating Decoder 

Creating EGLSink 

Playing file ../../../../samples/streams/sample_720p.h264 
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 

libEGL warning: MESA-LOADER: failed to retrieve device information

libEGL warning: DRI2: could not open /dev/dri/card0 (No such file or directory)
0:00:01.826674351 18301      0x2863a10 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1482 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:02.801061687 18301      0x2863a10 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:02.862321131 18301      0x2863a10 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:02.862343544 18301      0x2863a10 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
0:00:26.351014910 18301      0x2863a10 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1955> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:26.413938312 18301      0x2863a10 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Error: gst-resource-error-quark: Device '/dev/nvidia0' failed during initialization (1): gstv4l2object.c(4118): gst_v4l2_object_set_format_full (): /GstPipeline:pipeline0/nvv4l2decoder:nvv4l2-decoder:
Call to S_FMT failed for H264 @ 1280x720: Unknown error -1

(python3:18301): GStreamer-CRITICAL **: 16:34:11.360: gst_structure_set_parent_refcount: assertion 'refcount != NULL' failed
Segmentation fault (core dumped)
root@a2354b964bf6:/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test1# python deepstream_test_1.py ../../../../samples/streams/sample_720p.h264 > 001.txt
libEGL warning: MESA-LOADER: failed to retrieve device information

libEGL warning: DRI2: could not open /dev/dri/card0 (No such file or directory)
0:00:01.264888574 18316      0x3714810 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:02.249656463 18316      0x3714810 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:02.309652677 18316      0x3714810 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:02.311040327 18316      0x3714810 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Error: gst-resource-error-quark: Device '/dev/nvidia0' failed during initialization (1): gstv4l2object.c(4118): gst_v4l2_object_set_format_full (): /GstPipeline:pipeline0/nvv4l2decoder:nvv4l2-decoder:
Call to S_FMT failed for H264 @ 1280x720: Unknown error -1

and then it closed.

8-2 Second try
I installed libnvidia-encode-525 as below. (Actually, I wonder whether this was the right way.)

# apt-get install libnvidia-encode-525

and I got the following:

Fetched 51.9 MB in 13s (3960 kB/s)                                                                                                                                                                        
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package libnvidia-compute-525:amd64.
(Reading database ... 40004 files and directories currently installed.)
Preparing to unpack .../libnvidia-compute-525_525.60.11-0ubuntu0.20.04.2_amd64.deb ...
Unpacking libnvidia-compute-525:amd64 (525.60.11-0ubuntu0.20.04.2) ...
dpkg: error processing archive /var/cache/apt/archives/libnvidia-compute-525_525.60.11-0ubuntu0.20.04.2_amd64.deb (--unpack):
 unable to make backup link of './usr/lib/x86_64-linux-gnu/libcuda.so.525.60.11' before installing new version: Invalid cross-device link
dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)
Selecting previously unselected package libnvidia-decode-525:amd64.
Preparing to unpack .../libnvidia-decode-525_525.60.11-0ubuntu0.20.04.2_amd64.deb ...
Unpacking libnvidia-decode-525:amd64 (525.60.11-0ubuntu0.20.04.2) ...
Selecting previously unselected package libnvidia-encode-525:amd64.
Preparing to unpack .../libnvidia-encode-525_525.60.11-0ubuntu0.20.04.2_amd64.deb ...
Unpacking libnvidia-encode-525:amd64 (525.60.11-0ubuntu0.20.04.2) ...
Errors were encountered while processing:
 /var/cache/apt/archives/libnvidia-compute-525_525.60.11-0ubuntu0.20.04.2_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

Some errors were shown, but I tried running the deepstream app anyway.

python deepstream_test_1.py ../../../../samples/streams/sample_720p.h264 

I got the output below. (Finally I got some inference results!! But…)

python deepstream_test_1.py ../../../../samples/streams/sample_720p.h264           
Creating Pipeline 
 
Creating Source 
 
Creating H264Parser 

Creating Decoder 

Creating EGLSink 

Playing file ../../../../samples/streams/sample_720p.h264 
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 

libEGL warning: MESA-LOADER: failed to retrieve device information

libEGL warning: DRI2: could not open /dev/dri/card0 (No such file or directory)
0:00:01.256016324 18381      0x23a9810 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:02.236184303 18381      0x23a9810 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:02.296249198 18381      0x23a9810 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:02.297609952 18381      0x23a9810 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
cuGraphicsGLRegisterBuffer failed with error(304) gst_eglglessink_cuda_init texture = 1
Frame Number=0 Number of Objects=13 Vehicle_count=9 Person_count=4
0:00:02.485631809 18381      0x23abaa0 WARN                 nvinfer gstnvinfer.cpp:2300:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:02.485647245 18381      0x23abaa0 WARN                 nvinfer gstnvinfer.cpp:2300:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-negotiated (-4)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2300): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)
Frame Number=1 Number of Objects=11 Vehicle_count=8 Person_count=3
Frame Number=2 Number of Objects=11 Vehicle_count=7 Person_count=4
Frame Number=3 Number of Objects=11 Vehicle_count=7 Person_count=4
Frame Number=4 Number of Objects=11 Vehicle_count=8 Person_count=3
Frame Number=5 Number of Objects=12 Vehicle_count=8 Person_count=4
Frame Number=6 Number of Objects=11 Vehicle_count=7 Person_count=4

and then it closed.

In some posts about similar problems, people got a "deleting pipeline" line at the end, but I didn't get that line. "Frame Number=6 Number of Objects=11 Vehicle_count=7 Person_count=4" is the last line. (It always stops at frame 6.)

And I could see a window appear and disappear right away. It was probably the result screen with bounding boxes. (It behaves like "cv.imshow()" without "cv.waitKey()".)

When I added "time.sleep(0.5)" to the loop in "deepstream_test_1.py", it only reached frame 2.
When I edited the config file ("dstest1_pgie_config.txt"), the "Number of Objects" values changed. (WOW, I can't wait to get this working.)
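For what it's worth, one hypothetical way to narrow down where the "not-negotiated (-4)" comes from is to run the same decode chain into a fakesink, which takes display caps negotiation out of the pipeline entirely (paths as in the runs above; this assumes gst-launch-1.0 and the DeepStream plugins are installed in the container):

```shell
# If this reaches EOS cleanly, decode itself is fine and the failure points
# at the EGL sink (which needs a working /dev/dri and EGL stack inside the
# container), not at the decoder or inference.
gst-launch-1.0 filesrc location=../../../../samples/streams/sample_720p.h264 \
  ! h264parse ! nvv4l2decoder ! fakesink \
  || echo "pipeline failed: decoder or caps problem"
```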

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

My questions are these:

  1. Is there anything missing in my process?
  2. Was installing libnvidia-encode-525 (Step 8-2) the right way?

Thank you.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Hello,
NVIDIA provides a prebuilt DeepStream container, so you can use it directly instead of starting from an Ubuntu image and installing the dependencies and DeepStream from scratch. You can find the DeepStream container on NGC (DeepStream | NVIDIA NGC); use the tag "6.1.1-devel" if you want to install deepstream_python_apps.

You need to install the nvidia-docker2 tools from GitHub (NVIDIA/nvidia-docker: Build and run Docker containers leveraging NVIDIA GPUs).

After you have installed the nvidia-docker tools and pulled the DeepStream container, you can install deepstream_python_apps as you did before.
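For reference, a sketch of pulling and running that container (assuming Docker and nvidia-docker2 are already installed on the host; the X11 options mirror the ones used earlier in this thread):

```shell
IMAGE=nvcr.io/nvidia/deepstream:6.1.1-devel
docker pull "$IMAGE" || echo "docker pull failed: is Docker installed?"
xhost +local:root 2>/dev/null || true   # allow the container to use the X server
docker run -it --gpus all \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  "$IMAGE" || echo "docker run failed"
```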

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.