Requirements for a host to run DeepStream in a container

Hi,

I have tried to run DeepStream (deepstream-test1, to be exact) on two of my Ubuntu host boxes,
one running Ubuntu 18.04.3 with a T4 and another running Ubuntu 18.04.1 with a GTX 1080,
but both failed as follows.

T4

root@435228fe766a:~/deepstream_sdk_v4.0.1_x86_64/sources/apps/sample_apps/deepstream-test1# make
cc -c -o deepstream_test1_app.o -I../../../includes `pkg-config --cflags gstreamer-1.0` deepstream_test1_app.c
cc -o deepstream-test1-app deepstream_test1_app.o `pkg-config --libs gstreamer-1.0` -L/opt/nvidia/deepstream/deepstream-4.0/lib/ -lnvdsgst_meta -lnvds_meta -Wl,-rpath,/opt/nvidia/deepstream/deepstream-4.0/lib/
root@435228fe766a:~/deepstream_sdk_v4.0.1_x86_64/sources/apps/sample_apps/deepstream-test1# 
root@435228fe766a:~/deepstream_sdk_v4.0.1_x86_64/sources/apps/sample_apps/deepstream-test1# 
root@435228fe766a:~/deepstream_sdk_v4.0.1_x86_64/sources/apps/sample_apps/deepstream-test1# ./deepstream-test1-app ~/deepstream_sdk_v4.0.1_x86_64/samples/streams/sample_
sample_1080p_h264.mp4  sample_720p.h264       sample_720p.mjpeg      sample_cam6.mp4        
sample_1080p_h265.mp4  sample_720p.jpg        sample_720p.mp4        sample_industrial.jpg  
root@435228fe766a:~/deepstream_sdk_v4.0.1_x86_64/sources/apps/sample_apps/deepstream-test1# ./deepstream-test1-app ~/deepstream_sdk_v4.0.1_x86_64/samples/streams/sample_720p.h264 
Now playing: /root/deepstream_sdk_v4.0.1_x86_64/samples/streams/sample_720p.h264
libEGL warning: DRI2: could not open /dev/dri/card0 (No such file or directory)
Creating LL OSD context new
0:00:01.270221449    24 0x56191dbfa030 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:12.859147260    24 0x56191dbfa030 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /root/deepstream_sdk_v4.0.1_x86_64/samples/models/Primary_Detector/resnet10.caffemodel_b1_int8.engine
Running...
Creating LL OSD context new
cuGraphicsGLRegisterBuffer failed with error(219) gst_eglglessink_cuda_init texture = 1
Frame Number = 0 Number of objects = 5 Vehicle Count = 3 Person Count = 2
0:00:13.243452669    24 0x56191514a590 WARN                 nvinfer gstnvinfer.cpp:1830:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Internal data stream error.
0:00:13.243477385    24 0x56191514a590 WARN                 nvinfer gstnvinfer.cpp:1830:gst_nvinfer_output_loop:<primary-nvinference-engine> error: streaming stopped, reason not-negotiated (-4)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: gstnvinfer.cpp(1830): gst_nvinfer_output_loop (): /GstPipeline:dstest1-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason not-negotiated (-4)
Returned, stopping playback
Frame Number = 1 Number of objects = 5 Vehicle Count = 3 Person Count = 2
Frame Number = 2 Number of objects = 6 Vehicle Count = 4 Person Count = 2
Frame Number = 3 Number of objects = 6 Vehicle Count = 4 Person Count = 2
Frame Number = 4 Number of objects = 5 Vehicle Count = 3 Person Count = 2
Frame Number = 5 Number of objects = 5 Vehicle Count = 3 Person Count = 2
Frame Number = 6 Number of objects = 5 Vehicle Count = 3 Person Count = 2
Frame Number = 7 Number of objects = 6 Vehicle Count = 4 Person Count = 2
Deleting pipeline

GTX 1080

$ sudo docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /root nvcr.io/nvidia/deepstream:4.0.2-19.12-devel
[sudo] password for tsato: 
root@27dc424b5f39:~# cd deepstream_sdk_v4.0.2_x86_64/sources/apps/sample_apps/deepstream-test1/
root@27dc424b5f39:~/deepstream_sdk_v4.0.2_x86_64/sources/apps/sample_apps/deepstream-test1# make
cc -c -o deepstream_test1_app.o -I../../../includes `pkg-config --cflags gstreamer-1.0` deepstream_test1_app.c
cc -o deepstream-test1-app deepstream_test1_app.o `pkg-config --libs gstreamer-1.0` -L/opt/nvidia/deepstream/deepstream-4.0/lib/ -lnvdsgst_meta -lnvds_meta -Wl,-rpath,/opt/nvidia/deepstream/deepstream-4.0/lib/
root@27dc424b5f39:~/deepstream_sdk_v4.0.2_x86_64/sources/apps/sample_apps/deepstream-test1# deepstream-test1-app ~/deepstream_sdk_v4.0.2_x86_64/samples/streams/sample_720p.h264 
Now playing: /root/deepstream_sdk_v4.0.2_x86_64/samples/streams/sample_720p.h264
Creating LL OSD context new
0:00:09.960310623    24 0x55ab5c97ee30 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:22.261357273    24 0x55ab5c97ee30 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /root/deepstream_sdk_v4.0.2_x86_64/samples/models/Primary_Detector/resnet10.caffemodel_b1_int8.engine
Running...
Creating LL OSD context new
Frame Number = 0 Number of objects = 5 Vehicle Count = 3 Person Count = 2
...
Frame Number = 146 Number of objects = 7 Vehicle Count = 3 Person Count = 4
ERROR from element nvvideo-renderer: Output window was closed
Error details: ext/eglgles/gsteglglessink.c(894): gst_eglglessink_event_thread (): /GstPipeline:dstest1-pipeline/GstEglGlesSink:nvvideo-renderer
Returned, stopping playback
Frame Number = 147 Number of objects = 7 Vehicle Count = 3 Person Count = 4
Frame Number = 148 Number of objects = 9 Vehicle Count = 5 Person Count = 4
Frame Number = 149 Number of objects = 11 Vehicle Count = 5 Person Count = 6
Frame Number = 150 Number of objects = 9 Vehicle Count = 5 Person Count = 4
Deleting pipeline

The NVIDIA driver version is 440.64.00 on both machines.

    $ nvidia-smi 
    Mon Mar 16 19:08:49 2020       
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 440.64.00    Driver Version: 440.64.00    CUDA Version: 10.2     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce GTX 1080    On   | 00000000:05:00.0  On |                  N/A |
    |  0%   50C    P0    40W / 200W |    412MiB /  8116MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
                                                                                   
    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID   Type   Process name                             Usage      |
    |=============================================================================|
    |    0      2440      G   /usr/lib/xorg/Xorg                           257MiB |
    |    0      2960      G   /usr/bin/gnome-shell                         149MiB |
    |    0      5591      G   /usr/lib/firefox/firefox                       1MiB |
    +-----------------------------------------------------------------------------+

The document on NGC mentions only the two requirements below. What else is required to run DeepStream in a container?

Prerequisites

Ensure these prerequisites are available on your system:

    nvidia-docker: We recommend using Docker 19.03 along with the latest nvidia-container-toolkit as described in the installation steps. Usage of nvidia-docker2 packages in conjunction with prior docker versions is now deprecated.

    NVIDIA display driver version 418+

Thanks.

but both failed as follows.

This was not correct. For this specific test, it was actually successful on the GTX 1080 box.

I have tried the deepstream-test1-app with a fakesink on the T4 box.
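
The only source change needed is the sink element in deepstream_test1_app.c. Roughly (a minimal sketch; the stock sample creates an nveglglessink, as the GstEglGlesSink error above shows):

    /* deepstream_test1_app.c -- sink creation in main() */

    /* Stock sample: renders to an X11/EGL window, so it needs a working display. */
    sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");

    /* Headless variant used for this test: frames are discarded, no display required.
     * The element name "nvvideo-renderer" is kept so the rest of the app is untouched. */
    sink = gst_element_factory_make ("fakesink", "nvvideo-renderer");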

# cat deepstream_test1_app.c | grep nvvideo-renderer
  sink = gst_element_factory_make ("fakesink", "nvvideo-renderer");
# make
cc -c -o deepstream_test1_app.o -I../../../includes `pkg-config --cflags gstreamer-1.0` deepstream_test1_app.c
cc -o deepstream-test1-app deepstream_test1_app.o `pkg-config --libs gstreamer-1.0` -L/opt/nvidia/deepstream/deepstream-4.0/lib/ -lnvdsgst_meta -lnvds_meta -Wl,-rpath,/opt/nvidia/deepstream/deepstream-4.0/lib/
# ./deepstream-test1-app ~/deepstream_sdk_v4.0.2_x86_64/samples/streams/sample_720p.h264 
Now playing: /root/deepstream_sdk_v4.0.2_x86_64/samples/streams/sample_720p.h264
Creating LL OSD context new
0:00:00.465904358    93 0x564dbe461b80 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:11.041889514    93 0x564dbe461b80 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /root/deepstream_sdk_v4.0.2_x86_64/samples/models/Primary_Detector/resnet10.caffemodel_b1_int8.engine
Running...
Creating LL OSD context new
Frame Number = 0 Number of objects = 5 Vehicle Count = 3 Person Count = 2
Frame Number = 1 Number of objects = 5 Vehicle Count = 3 Person Count = 2
...
Frame Number = 1440 Number of objects = 8 Vehicle Count = 6 Person Count = 2
Frame Number = 1441 Number of objects = 0 Vehicle Count = 0 Person Count = 0
End of stream
Returned, stopping playback
Deleting pipeline

This time, the app ran to completion successfully.

Do you mean that on the T4 platform, fakesink is OK, but nveglglessink gives an error?

Correct.

Run "xhost +" before running the Docker container, and:

$ sudo apt-get install x11-xserver-utils

I also ran it in the container: fakesink is OK with no screen display, but nveglglessink meets the error. And

# gst-launch-1.0 videotestsrc ! nveglglessink

could run and display.
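
To summarize the suggested order of operations (a sketch, reusing the docker run invocation from earlier in this thread):

    # On the host, before starting the container:
    $ sudo apt-get install x11-xserver-utils
    $ xhost +
    $ sudo docker run --gpus all -it --rm \
        -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
        -w /root nvcr.io/nvidia/deepstream:4.0.2-19.12-devel

    # Inside the container, a quick sanity check that EGL/X11 forwarding works:
    # gst-launch-1.0 videotestsrc ! nveglglessink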

root@b5feba2eee69:~/deepstream_sdk_v4.0.2_x86_64/sources/apps/sample_apps/deepstream-test1# ./deepstream-test1-app ~/deepstream_sdk_v4.0.2_x86_64/samples/streams/sample_720p.h264
Now playing: /root/deepstream_sdk_v4.0.2_x86_64/samples/streams/sample_720p.h264
libEGL warning: DRI2: failed to authenticate
Creating LL OSD context new
0:00:00.297966734   186 0x564b5db7cc40 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:05.623824554   186 0x564b5db7cc40 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /root/deepstream_sdk_v4.0.2_x86_64/samples/models/Primary_Detector/resnet10.caffemodel_b1_int8.engine
Running...
Creating LL OSD context new
cuGraphicsGLRegisterBuffer failed with error(304) gst_eglglessink_cuda_init texture = 1
Frame Number = 0 Number of objects = 5 Vehicle Count = 3 Person Count = 2
0:00:05.899011382   186 0x564b54933d90 WARN                 nvinfer gstnvinfer.cpp:1830:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Internal data stream error.
0:00:05.899024915   186 0x564b54933d90 WARN                 nvinfer gstnvinfer.cpp:1830:gst_nvinfer_output_loop:<primary-nvinference-engine> error: streaming stopped, reason not-negotiated (-4)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: gstnvinfer.cpp(1830): gst_nvinfer_output_loop (): /GstPipeline:dstest1-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason not-negotiated (-4)
Returned, stopping playback
Frame Number = 1 Number of objects = 5 Vehicle Count = 3 Person Count = 2
Frame Number = 2 Number of objects = 5 Vehicle Count = 3 Person Count = 2
Frame Number = 3 Number of objects = 6 Vehicle Count = 4 Person Count = 2
Frame Number = 4 Number of objects = 6 Vehicle Count = 4 Person Count = 2
Frame Number = 5 Number of objects = 5 Vehicle Count = 3 Person Count = 2
Frame Number = 6 Number of objects = 6 Vehicle Count = 4 Person Count = 2
Deleting pipeline

Yes. You will get a different result if you run it without those steps.

Here is some more info about my environment.

$ apt search x11-xserver-utils
Sorting... Done
Full Text Search... Done
x11-xserver-utils/bionic,now 7.7+7build1 amd64 [installed,automatic]
  X server utilities
$ env | grep DISPLAY
DISPLAY=:1
$ nvidia-smi 
Wed Mar 25 08:54:28 2020       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.64.00    Driver Version: 440.64.00    CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:01:00.0 Off |                    0 |
| N/A   26C    P8    11W /  70W |      0MiB / 15109MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

And the result from xhost +.

emi@t4dev:~$ xhost +
access control disabled, clients can connect from any host
emi@t4dev:~$ sudo docker run --gpus all -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /root nvcr.io/nvidia/deepstream:4.0.2-19.12-devel
root@6ec192271f3a:~# cd deepstream_sdk_v4.0.2_x86_64/sources/apps/sample_apps/deepstream-test1
root@6ec192271f3a:~/deepstream_sdk_v4.0.2_x86_64/sources/apps/sample_apps/deepstream-test1# make
cc -c -o deepstream_test1_app.o -I../../../includes `pkg-config --cflags gstreamer-1.0` deepstream_test1_app.c
cc -o deepstream-test1-app deepstream_test1_app.o `pkg-config --libs gstreamer-1.0` -L/opt/nvidia/deepstream/deepstream-4.0/lib/ -lnvdsgst_meta -lnvds_meta -Wl,-rpath,/opt/nvidia/deepstream/deepstream-4.0/lib/
root@6ec192271f3a:~/deepstream_sdk_v4.0.2_x86_64/sources/apps/sample_apps/deepstream-test1# ./deepstream-test1-app ~/deepstream_sdk_v4.0.2_x86_64/samples/streams/sample_720p.h264 
Now playing: /root/deepstream_sdk_v4.0.2_x86_64/samples/streams/sample_720p.h264
libEGL warning: DRI2: could not open /dev/dri/card0 (No such file or directory)
Creating LL OSD context new
0:00:01.786180848    25 0x55d1ef139630 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:12.652799995    25 0x55d1ef139630 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /root/deepstream_sdk_v4.0.2_x86_64/samples/models/Primary_Detector/resnet10.caffemodel_b1_int8.engine
Running...
Creating LL OSD context new
cuGraphicsGLRegisterBuffer failed with error(219) gst_eglglessink_cuda_init texture = 1
Frame Number = 0 Number of objects = 5 Vehicle Count = 3 Person Count = 2
0:00:13.457293976    25 0x55d1e1d95990 WARN                 nvinfer gstnvinfer.cpp:1830:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Internal data stream error.
0:00:13.457311424    25 0x55d1e1d95990 WARN                 nvinfer gstnvinfer.cpp:1830:gst_nvinfer_output_loop:<primary-nvinference-engine> error: streaming stopped, reason not-negotiated (-4)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: gstnvinfer.cpp(1830): gst_nvinfer_output_loop (): /GstPipeline:dstest1-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason not-negotiated (-4)
Returned, stopping playback
Frame Number = 1 Number of objects = 5 Vehicle Count = 3 Person Count = 2
Frame Number = 2 Number of objects = 5 Vehicle Count = 3 Person Count = 2
Frame Number = 3 Number of objects = 6 Vehicle Count = 4 Person Count = 2
Frame Number = 4 Number of objects = 6 Vehicle Count = 4 Person Count = 2
Frame Number = 5 Number of objects = 6 Vehicle Count = 4 Person Count = 2
Frame Number = 6 Number of objects = 5 Vehicle Count = 3 Person Count = 2
Frame Number = 7 Number of objects = 6 Vehicle Count = 4 Person Count = 2
Deleting pipeline
root@6ec192271f3a:~/deepstream_sdk_v4.0.2_x86_64/sources/apps/sample_apps/deepstream-test1#

It looks like you hit the same problem as I did!

There is a similar post here.

The reason this error is raised is that the T4 card doesn't have a display output.

In our case, we have onboard Intel graphics, to which X11 is attached. The T4 is used only for inference.

Now that I have found the answer (the graphics card must have a display output, and probably needs to be connected to an actual display, for a DeepStream app running in a Docker container to show a window), I will mark this as solved.

You can use a network sink (e.g. RTSP) in Docker and stream the video over an exposed port. That's what I do. No X11 or display connection needed.

Interesting. Sounds good!

Hi. I am facing the same issue. Would you kindly elaborate on how to implement this part? Please pardon my ignorance; I am a novice in video analytics (VA). I am working on a T4 GPU server.

RTSP you mean?

I believe the DeepStream app supports RTSP sinks based on GstRtspServer. You just put the proper sink type and port in the config file and expose the port(s) in your Dockerfile. My implementation is based on NVIDIA's Python implementation and written in Genie (which compiles to C).
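
For the reference deepstream-app, the relevant config section looks something like this (a sketch based on the [sink] groups in the DeepStream 4.0 sample configs; the ports and bitrate are just illustrative values):

    [sink0]
    enable=1
    # Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
    type=4
    # 1=h264 2=h265
    codec=1
    bitrate=4000000
    # RTSPStreaming properties
    rtsp-port=8554
    udp-port=5400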

The Genie sink source is here. The Python code it's based on is here. If you want something you can use directly in C, the Vala compiler can generate a library and header for you from the bins.gs file with some modification... Or you can look into the DeepStream app code.

The Dockerfile I am using is here. Pretty much the only thing you would need to copy is the EXPOSE line; if you use the DeepStream app, just make sure the RTSP port used by DeepStream is the same one you expose.
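
On the Docker side it amounts to something like this (a sketch; 8554 is just the RTSP port assumed in the config sketch above, and the run command mirrors the one used earlier in this thread minus the X11 bits):

    # Dockerfile: document the RTSP port your config uses
    EXPOSE 8554

    # At run time, publish it to the host:
    $ sudo docker run --gpus all -it --rm -p 8554:8554 \
        -w /root nvcr.io/nvidia/deepstream:4.0.2-19.12-devel

    # Then point VLC or ffplay at the rtsp:// URL the app reports
    # (or at whatever port/mount point you configured).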

I believe NVIDIA also multicasts over port 5000/udp. I have never tried that within Docker. The traffic is probably only visible to members of your Docker networks unless you use --net host. You will have to experiment on that one; I have little idea how it behaves. If you use the multicast stream, you can probably cut out GstRtspServer entirely and just use nginx container(s) to rebroadcast.

Hi mdegans. Sorry for the lack of clarity in my query. I am rephrasing it below.
I am seeing the following message while executing the test1 app:
'libEGL warning: DRI2: failed to authenticate'
The system has Ubuntu 18.04, an NVIDIA driver (418+), and CUDA 10.1, and I am using the DS 4.0.2 Docker container. The server has one T4 along with Intel chips. Even after installing the NVIDIA drivers, the Ubuntu details screen shows llvmpipe under Graphics, but the nvidia-smi command gives the same output as in the posts above.
We have connected a monitor to the VGA port.
Are we doing something wrong in our configuration?
Please provide your valuable help.

Re: X11 via Docker: you will have to ask an NVIDIA rep. They may support this configuration, but I do not use it, sorry.

I also don't have a T4 and am not sure it can provide a display the way you want. Some sort of network sink may be required even locally so your Intel graphics can play back the video. NVIDIA could assuredly provide better advice on this.

Thanks mdegans.
Requesting NVIDIA experts for help.

Hi us.raghavender.efkon,
Please open a new topic for your issue; we will support you from there.
Thanks

Thanks kaycc. I am starting a new topic.