DeepStream 5.0 samples run natively on Nano, but not inside Docker

I installed JetPack 4.4 on a Nano and ran some DeepStream 5.0 samples successfully natively (outside Docker). Then I built a Docker image from nvcr.io/nvidia/deepstream-l4t:5.0-dp-20.04-samples and tried to run test1 inside the container, but it failed.

I also tried SSHing into the Nano; after that, the test1 sample failed no matter where I ran it.

I confirmed that X11 forwarding works properly: running xclock both natively via SSH and inside Docker displayed the graphical clock successfully.

Did I miss anything needed to make the sample work with X11 forwarding or inside Docker?

root@nano-desktop:~/ds-dp-5.0/deepstream_python_v0.9.orig/python/apps/deepstream-test1# ./deepstream_test_1.py ../sample_720p.h264
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

libEGL warning: MESA-LOADER: failed to open swrast (search paths /usr/lib/aarch64-linux-gnu/dri:${ORIGIN}/dri:/usr/lib/dri)

libEGL warning: MESA-LOADER: failed to open swrast (search paths /usr/lib/aarch64-linux-gnu/dri:${ORIGIN}/dri:/usr/lib/dri)

nvbuf_utils: Could not get EGL display connection
 Unable to create NvStreamMux
 Unable to create pgie
 Unable to create nvvidconv
 Unable to create nvosd
Creating EGLSink

Playing file ../sample_720p.h264
Traceback (most recent call last):
  File "./deepstream_test_1.py", line 266, in <module>
    sys.exit(main(sys.argv))
  File "./deepstream_test_1.py", line 199, in main
    streammux.set_property('width', 1920)
AttributeError: 'NoneType' object has no attribute 'set_property'
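As an aside, the AttributeError is only a downstream symptom: in the sample script, `Gst.ElementFactory.make()` returns None when a plugin (here nvstreammux) fails to load, matching the "Unable to create NvStreamMux" messages above, and the unchecked None later crashes in `set_property`. A minimal sketch of a fail-fast wrapper (the helper name is my own; `factory_make` stands in for `Gst.ElementFactory.make` so the idea can be shown without a GStreamer install):

```python
# Sketch: raise a clear error at creation time instead of an
# AttributeError later. `factory_make` is a stand-in for
# Gst.ElementFactory.make(factory_name, element_name).
def make_element(factory_make, factory_name, element_name):
    elem = factory_make(factory_name, element_name)
    if elem is None:
        # A None here usually means the plugin failed to load (e.g. the
        # "Could not get EGL display connection" error above), not a typo.
        raise RuntimeError(f"Unable to create {element_name} "
                           f"(plugin '{factory_name}' failed to load)")
    return elem
```

With a check like this, the script would stop right at the nvstreammux failure instead of crashing later on `set_property`.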

The glxinfo output is shown below.

name of display: localhost:1.0
MESA-LOADER: failed to open swrast (search paths /usr/lib/aarch64-linux-gnu/dri:${ORIGIN}/dri:/usr/lib/dri)
libGL error: failed to load driver: swrast
X Error of failed request: GLXBadContext
Major opcode of failed request: 149 (GLX)
Minor opcode of failed request: 6 (X_GLXIsDirect)
Serial number of failed request: 25
Current serial number in output stream: 24

I also tried running the CUDA samples; they ran successfully natively, yet not inside Docker.

Per this link, I ran some tests with nbody, as described below.

  1. Launched the container locally (without SSH). It ran properly and showed the result. The xhost command must be executed prior to docker run.

sudo xhost +si:localuser:root
sudo docker run --runtime nvidia --network host --rm -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:5.0-dp-20.04-samples

  2. Launched the container within an SSH session. --volume="$HOME/.Xauthority:/root/.Xauthority:rw" must be added to enable X11 forwarding.

sudo xhost +si:localuser:root
sudo docker run --runtime nvidia --network host --rm -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix --volume="$HOME/.Xauthority:/root/.Xauthority:rw" nvcr.io/nvidia/deepstream-l4t:5.0-dp-20.04-samples
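For clarity, the two docker run invocations above differ only in the .Xauthority mount needed for the SSH case. A small sketch that assembles the argument list (the helper name and structure are my own, not part of DeepStream or Docker):

```python
# Build the `docker run` argument list for the DeepStream L4T container.
# Pass xauth_path (e.g. $HOME/.Xauthority) only when launching from an SSH
# session, so the container can authenticate against the forwarded display.
def build_docker_run(image, display, xauth_path=None):
    cmd = [
        "docker", "run", "--runtime", "nvidia",
        "--network", "host", "--rm", "-it",
        "-e", f"DISPLAY={display}",
        "-v", "/tmp/.X11-unix/:/tmp/.X11-unix",
    ]
    if xauth_path:
        # SSH-session variant: mount the host's X authority file read-write.
        cmd += ["--volume", f"{xauth_path}:/root/.Xauthority:rw"]
    cmd.append(image)
    return cmd
```

build_docker_run(image, display) reproduces the local-session command, and passing xauth_path reproduces the SSH variant; in either case, sudo xhost +si:localuser:root must still be run on the host first.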

A window blinked once, then disappeared. Below are the messages.

root@nano-desktop:/tmp/samples/5_Simulations/nbody# ./nbody
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
	-fullscreen       (run n-body simulation in fullscreen mode)
	-fp64             (use double precision floating point values for simulation)
	-hostmem          (stores simulation data in host memory)
	-benchmark        (run benchmark to measure performance)
	-numbodies=<N>    (number of bodies (>= 1) to run in simulation)
	-device=<d>       (where d=0,1,2.... for the CUDA device to use)
	-numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
	-compare          (compares simulation results running once on the default GPU and once on the CPU)
	-cpu              (run n-body simulation on the CPU)
	-tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

debug1: client_input_channel_open: ctype x11 rchan 10 win 65536 max 16384
debug1: client_request_x11: request from 127.0.0.1 53134
debug1: x11_connect_display: $DISPLAY is launchd
debug1: channel 7: new [x11]
debug1: confirm x11
> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
libGL error: unable to load driver: swrast_dri.so
libGL error: failed to load driver: swrast
Required OpenGL extensions missing.
debug1: channel 7: FORCE input drain
debug1: channel 7: free: x11, nchannels 8

Since the L4T Docker image is officially released, I assume the DeepStream applications are supposed to run inside Docker via an SSH session, not only inside Docker via a local session, right?

Does anyone have a clue why the DS applications didn't work inside Docker via an SSH session?

Figured out that "sudo xhost +si:localuser:root" needs to be run before launching Docker.

Hi @bridge
The download page for the L4T Docker image - https://ngc.nvidia.com/catalog/containers/nvidia:deepstream-l4t - mentions that "xhost +" is required to allow external applications to connect to the host's X display.
Did you try "xhost +"?