Running the DeepStream Python apps in the nvcr.io/nvidia/deepstream:4.0.2-19.12-devel container, headless

Hello,

I’ve pulled the deepstream:4.0.2-19.12-devel container and built a new container starting from it, where I installed the Python bindings according to the HOWTO in GitHub - NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications.
I am running the container with the following command, where test_deepstream_build is the container I’ve built:
docker run --gpus all -it --rm -p 8554:8554 -w /root test_deepstream_build:latest
I changed the ‘sink’ to ‘fakesink’ in the pipeline in deepstream_test_1.py.
To run it, I use the following command: python3 deepstream_test_1.py rtsp://[link]
I get the following error:

0:00:01.000833168 17 0xfbbd0a0 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:01.585835799 17 0xfbbd0a0 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 1]:log(): …/rtSafe/safeContext.cpp (105) - Cudnn Error in initializeCommonContext: 4 (Could not initialize cudnn, please check cudnn installation.)
0:00:01.586004949 17 0xfbbd0a0 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 1]:log(): …/rtSafe/safeContext.cpp (105) - Cudnn Error in initializeCommonContext: 4 (Could not initialize cudnn, please check cudnn installation.)
0:00:01.586039349 17 0xfbbd0a0 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 1]:generateTRTModel(): Failed while building cuda engine for network
0:00:01.586180169 17 0xfbbd0a0 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 1]:initialize(): Failed to create engine from model files
0:00:01.586217909 17 0xfbbd0a0 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:01.586224679 17 0xfbbd0a0 WARN nvinfer gstnvinfer.cpp:692:gst_nvinfer_start: error: Config file path: dstest1_pgie_config.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest1_pgie_config.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR

Any ideas what is missing? Thanks a lot!

from it I built a new container
What are the differences between your Docker image and deepstream:4.0.2-19.12-devel?

From the error log:

0:00:01.585835799 17 0xfbbd0a0 ERROR nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger: NvDsInferContext[UID 1]:log(): …/rtSafe/safeContext.cpp (105) - Cudnn Error in initializeCommonContext: 4 (Could not initialize cudnn, please check cudnn installation.)

it seems cuDNN is not installed correctly. You can launch deepstream:4.0.2-19.12-devel and compare the CUDA and cuDNN installations, e.g.:

# find /usr/ -name "libcudnn*"
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so
/usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a
/usr/lib/x86_64-linux-gnu/libcudnn_static.a
/usr/lib/x86_64-linux-gnu/libcudnn.so.7
/usr/share/lintian/overrides/libcudnn7
/usr/share/lintian/overrides/libcudnn7-dev
/usr/share/doc/libcudnn7
/usr/share/doc/libcudnn7-dev

# ls -l /usr/local/cuda
lrwxrwxrwx 1 root root 9 Aug 26 2019 /usr/local/cuda -> cuda-10.1
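Besides find, a quick sanity check is to ask the dynamic loader itself whether it can resolve cuDNN; a process can fail to load a library that find can still see. A minimal sketch (Python stdlib only; the output depends on the machine it runs on):

```python
from ctypes.util import find_library

# Ask the dynamic loader to resolve the cuDNN shared library. This goes a
# step beyond `find`, since it checks what a process would actually link.
cudnn = find_library("cudnn")

if cudnn is None:
    print("cuDNN is not resolvable by the dynamic loader")
else:
    print("cuDNN resolves to:", cudnn)
```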

Hi, thanks!
I’ve checked, and cuDNN seems to be installed correctly:

root@c8d48ff90de9:~# find /usr/ -name "libcudnn*"
/usr/share/doc/libcudnn7
/usr/share/doc/libcudnn7-dev
/usr/share/lintian/overrides/libcudnn7
/usr/share/lintian/overrides/libcudnn7-dev
/usr/lib/x86_64-linux-gnu/libcudnn.so
/usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a
/usr/lib/x86_64-linux-gnu/libcudnn_static.a
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.7
root@c8d48ff90de9:~# ls -l /usr/local/cuda
lrwxrwxrwx 1 root root 9 Aug 26  2019 /usr/local/cuda -> cuda-10.1

The Dockerfile for the new container:

FROM nvcr.io/nvidia/deepstream:4.0.2-19.12-devel

ADD python_release_bind /root/deepstream_sdk_v4.0.2_x86_64/sources/python

RUN ls /app

RUN apt-get update -y
RUN apt-get install python-gi-dev -y
# Note: each RUN runs in a fresh shell, so these exports (like RUN cd below) do not persist into later layers
RUN export GST_LIBS="-lgstreamer-1.0 -lgobject-2.0 -lglib-2.0"
RUN export GST_CFLAGS="-pthread -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include"
RUN apt-get install git -y
RUN cd /app/panel_factory/research/deepstream
RUN git clone https://github.com/GStreamer/gst-python.git
RUN cd gst-python && \
    git checkout 1a8f48a && \
    apt-get install autoconf automake libtool python-dev python3-dev libgstreamer1.0-dev -y && \
   ./autogen.sh PYTHON=python3 && \
   ./configure PYTHON=python3 && \
    make &&\
    make install

EXPOSE 8554

RUN ls root/deepstream_sdk_v4.0.2_x86_64/sources
RUN ls root/deepstream_sdk_v4.0.2_x86_64/sources/python

I tried your Dockerfile as below, and I can’t reproduce your issue.

Build the Docker image
$ docker build -t mchi_ds_test_docker --network=host .

Dockerfile (referring to your Dockerfile)

FROM nvcr.io/nvidia/deepstream:4.0.2-19.12-devel

ADD python_release_bind /root/deepstream_sdk_v4.0.2_x86_64/sources/python

#RUN ls /app

RUN apt-get update -y
RUN apt-get install python-gi-dev -y
RUN export GST_LIBS="-lgstreamer-1.0 -lgobject-2.0 -lglib-2.0"
RUN export GST_CFLAGS="-pthread -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include"
RUN apt-get install git -y
RUN mkdir -p /app/panel_factory/research/deepstream
RUN cd /app/panel_factory/research/deepstream
RUN git clone https://github.com/GStreamer/gst-python.git

RUN cd gst-python && \
    git checkout 1a8f48a && \
    apt-get install autoconf automake libtool python-dev python3-dev libgstreamer1.0-dev -y && \
   ./autogen.sh PYTHON=python3 && \
   ./configure PYTHON=python3 && \
    make &&\
    make install

EXPOSE 8554

RUN ls /root/deepstream_sdk_v4.0.2_x86_64/sources
RUN ls /root/deepstream_sdk_v4.0.2_x86_64/sources/python

Launch the Docker container
$ docker run --gpus all -it --rm -p 8554:8554 -w /root mchi_ds_test_docker

Operate in the container
1). Change sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer") to sink = Gst.ElementFactory.make("fakesink", "fakesink")

2). run
root@72df2daef6ea:~/deepstream_sdk_v4.0.2_x86_64/sources/python/apps/deepstream-test1# python3 deepstream_test_1.py /root/deepstream_sdk_v4.0.2_x86_64/samples/streams/sample_720p.h264
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file /root/deepstream_sdk_v4.0.2_x86_64/samples/streams/sample_720p.h264
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

Creating LL OSD context new
0:00:00.500260585 161 0x1a68190 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:09.939989665 161 0x1a68190 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /root/deepstream_sdk_v4.0.2_x86_64/samples/models/Primary_Detector/resnet10.caffemodel_b1_int8.engine
Creating LL OSD context new
Frame Number=0 Number of Objects=5 Vehicle_count=3 Person_count=2
Frame Number=1 Number of Objects=5 Vehicle_count=3 Person_count=2
Frame Number=2 Number of Objects=5 Vehicle_count=3 Person_count=2
Frame Number=3 Number of Objects=6 Vehicle_count=4 Person_count=2
Frame Number=4 Number of Objects=6 Vehicle_count=4 Person_count=2
Frame Number=5 Number of Objects=6 Vehicle_count=4 Person_count=2
Frame Number=6 Number of Objects=5 Vehicle_count=3 Person_count=2
Frame Number=7 Number of Objects=6 Vehicle_count=4 Person_count=2

Okay, thanks for trying it. It also works on my side with the sample video. Then I guess the issue might come from trying to run the test app with an RTSP stream. Do you know if there’s something else that needs to be changed in order to use an RTSP stream?

I don’t think it’s caused by the source type, i.e. RTSP, since the failure is about cuDNN.
Also, I tried with RTSP; it can build the TensorRT engine successfully, as shown below.

Are you using the default model/engine or your own model/engine?

root@72df2daef6ea:~/deepstream_sdk_v4.0.2_x86_64/sources/python/apps/deepstream-test1# python3 deepstream_test_1.py rtsp://freja.hiof.no:1935/rtplive/definst/hessdalen03.stream
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file rtsp://freja.hiof.no:1935/rtplive/definst/hessdalen03.stream
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

Creating LL OSD context new
0:00:00.494395481 289 0x272b990 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:09.998787402 289 0x272b990 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger: NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /root/deepstream_sdk_v4.0.2_x86_64/samples/models/Primary_Detector/resnet10.caffemodel_b1_int8.engine
Error: gst-resource-error-quark: Resource not found. (3): gstfilesrc.c(533): gst_file_src_start (): /GstPipeline:pipeline0/GstFileSrc:file-source:
No such file “rtsp://freja.hiof.no:1935/rtplive/definst/hessdalen03.stream”
root@72df2daef6ea:~/deepstream_sdk_v4.0.2_x86_64/sources/python/apps/deepstream-test1# ls -l /root/deepstream_sdk_v4.0.2_x86_64/samples/models/Primary_Detector/resnet10.caffemodel_b1_int8.engine
-rw-r--r-- 1 root root 4405865 Mar 6 08:10 /root/deepstream_sdk_v4.0.2_x86_64/samples/models/Primary_Detector/resnet10.caffemodel_b1_int8.engine

I am using the default model.
I managed to get it to run just like your example above, and, like you, I get the following error:

Error: gst-resource-error-quark: Resource not found. (3): gstfilesrc.c(533): gst_file_src_start (): /GstPipeline:pipeline0/GstFileSrc:file-source:
No such file "rtsp://admin:Hik.1234567890@1[IP]/cam/realmonitor?channel=1"

I checked the RTSP stream with VLC and it is correctly streaming, and I do get a response when I ping the camera’s IP from inside the running container.

deepstream-test1 only accepts a local file source; for an RTSP source, you can refer to deepstream-test3.
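The "No such file" error above follows from this: test1 builds its pipeline around filesrc, which expects a plain filesystem path, while test3 uses uridecodebin, which takes a full URI. A quick sketch of the distinction (the helper name is mine, not from the samples):

```python
from urllib.parse import urlparse

def is_uri(arg: str) -> bool:
    """Return True when the argument carries a URI scheme (rtsp://, http://, ...).

    A filesrc-based app like deepstream-test1 treats its argument as a local
    path, so an rtsp:// URL fails with "Resource not found".
    """
    return urlparse(arg).scheme != ""

print(is_uri("rtsp://freja.hiof.no:1935/rtplive/definst/hessdalen03.stream"))  # True
print(is_uri("/root/deepstream_sdk_v4.0.2_x86_64/samples/streams/sample_720p.h264"))  # False
```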

Okay, thanks. Can deepstream-test3 run headless?

I don’t think so, but you can change sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer") to sink = Gst.ElementFactory.make("fakesink", "fakesink").
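If you want to keep one script working in both modes, one option is to pick the sink name based on whether a display is available; a minimal sketch (the DISPLAY check and helper name are my assumptions, not from the sample):

```python
import os

def pick_sink_name() -> str:
    # With no X display (typical for a headless container), fall back to
    # fakesink; otherwise keep the on-screen EGL renderer.
    return "nveglglessink" if os.environ.get("DISPLAY") else "fakesink"

# The result feeds straight into the existing factory call, e.g.:
#   sink = Gst.ElementFactory.make(pick_sink_name(), "sink")
print(pick_sink_name())
```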

Okay, thanks for the help! I’ll try.