Cannot run deepstream-test1-app on NX with image pulled from NGC (nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples)

• Hardware Platform (Jetson / GPU)
Jetson NX

• DeepStream Version
5.1

• JetPack Version (valid for Jetson only)
4.5-b129

• TensorRT Version
7.1.3

• Issue Type (questions, new requirements, bugs)
After starting the container with

sudo docker run -it --rm --net=host --runtime nvidia --device=/dev/video0:/dev/video0 -w /opt/nvidia/deepstream/deepstream-5.1 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples

and running

deepstream-test1-app samples/streams/sample_720p.h264

I got:

Now playing: /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264

Using winsys: x11 
Opening in BLOCKING MODE
Opening in BLOCKING MODE 
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:03.520667466    36     0x3210d6d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:03.520952664    36     0x3210d6d0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:03.521014132    36     0x3210d6d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
INFO: [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: [TRT]: 
INFO: [TRT]: --------------- Layers running on DLA: 
INFO: [TRT]: 
INFO: [TRT]: --------------- Layers running on GPU: 
INFO: [TRT]: conv1 + activation_1/Relu, block_1a_conv_1 + activation_2/Relu, block_1a_conv_2, block_1a_conv_shortcut + add_1 + activation_3/Relu, block_2a_conv_1 + activation_4/Relu, block_2a_conv_2, block_2a_conv_shortcut + add_2 + activation_5/Relu, block_3a_conv_1 + activation_6/Relu, block_3a_conv_2, block_3a_conv_shortcut + add_3 + activation_7/Relu, block_4a_conv_1 + activation_8/Relu, block_4a_conv_2, block_4a_conv_shortcut + add_4 + activation_9/Relu, conv2d_cov, conv2d_cov/Sigmoid, conv2d_bbox, 

INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
0:02:15.461580879    36     0x3210d6d0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1749> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:02:15.888361932    36     0x3210d6d0 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Running...
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Frame Number = 0 Number of objects = 5 Vehicle Count = 3 Person Count = 2
0:02:16.356050306    36     0x32105370 WARN                 nvinfer gstnvinfer.cpp:1984:gst_nvinfer_output_loop:<primary-nvinference-engine> error: Internal data stream error.
0:02:16.356102848    36     0x32105370 WARN                 nvinfer gstnvinfer.cpp:1984:gst_nvinfer_output_loop:<primary-nvinference-engine> error: streaming stopped, reason error (-5)
ERROR from element primary-nvinference-engine: Internal data stream error.
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1984): gst_nvinfer_output_loop (): /GstPipeline:dstest1-pipeline/GstNvInfer:primary-nvinference-engine:
streaming stopped, reason error (-5)
Returned, stopping playback
Deleting pipeline

Does anyone have experience with this issue? Please share your comments with me.

Hi,
Please make sure you run the commands below before running the sample with sink type nveglglessink:
export DISPLAY=:0 (or :1)
xrandr   # to check whether the display was exported successfully
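If you are unsure which DISPLAY value is valid on your host, the probing step above can be sketched as a small helper (illustrative only; the name pick_display is made up, and it assumes xrandr is installed on the host, outside the container):

```shell
# Hypothetical helper: try candidate DISPLAY values and echo the first
# one that xrandr can actually talk to.
pick_display() {
  for d in :0 :1; do
    # xrandr -d <display> queries that display; success means it is usable
    if xrandr -d "$d" >/dev/null 2>&1; then
      echo "$d"
      return 0
    fi
  done
  echo "no working display found" >&2
  return 1
}

# Usage on the host, before launching docker:
#   export DISPLAY="$(pick_display)"
```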

Hi @amycao,

After setting

export DISPLAY=:0 (or :1)

and running xrandr, I got

No protocol specified
Can't open display :0 (or 1)

and the app fails when running

deepstream-test1-app /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264 


No protocol specified
nvbuf_utils: Could not get EGL display connection
No protocol specified
No EGL Display

Also, I found that the following example app originally ran fine, before setting export DISPLAY=:0 (or :1):

deepstream-app -c samples/configs/deepstream-app/source1_usb_dec_infer_resnet_int8.txt

But after setting that, this app cannot run either; I got the following error:

No protocol specified
nvbuf_utils: Could not get EGL display connection
No protocol specified
No EGL Display 
nvbufsurftransform: Could not get EGL display connection

Before setting DISPLAY it worked.
Any comments on this?

Do you have a real display (e.g. HDMI TV) connected to the NX?

If HDMI is connected, as Amy said above:

  1. export DISPLAY=:0 (or :1)
    xrandr   # to check whether the display was exported successfully
  2. Run docker with “xhost +” and “-e DISPLAY=$DISPLAY” so that docker can use the display:
    $ xhost +
    $ sudo docker run -it --rm --net=host --runtime nvidia --device=/dev/video0:/dev/video0 -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.1 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples

If HDMI is not connected:

  1. Run docker without “-e DISPLAY=$DISPLAY” and “xhost +”:
    $ sudo docker run -it --rm --net=host --runtime nvidia --device=/dev/video0:/dev/video0 -w /opt/nvidia/deepstream/deepstream-5.1 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples

  2. For the sample deepstream-test1-app, which by default renders the output to the display, the change below is needed to output to fakesink instead:

    diff --git a/deepstream_test1_app.c b/deepstream_test1_app.c
    index ebe48b0..843dc9c 100644
    --- a/deepstream_test1_app.c
    +++ b/deepstream_test1_app.c
    @@ -207,7 +207,8 @@ main (int argc, char *argv[])
       if(prop.integrated) {
         transform = gst_element_factory_make ("nvegltransform", "nvegl-transform");
       }
    -  sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");
    +  //sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");
    +  sink = gst_element_factory_make ("fakesink", "nvvideo-renderer");

       if (!source || !h264parser || !decoder || !pgie
           || !nvvidconv || !nvosd || !sink) {

    @@ -244,7 +245,7 @@ main (int argc, char *argv[])
       if(prop.integrated) {
         gst_bin_add_many (GST_BIN (pipeline),
             source, h264parser, decoder, streammux, pgie,
    -        nvvidconv, nvosd, transform, sink, NULL);
    +        nvvidconv, nvosd, /*transform,*/ sink, NULL);
      }
      else {
        gst_bin_add_many (GST_BIN (pipeline),
    @@ -287,7 +288,7 @@ main (int argc, char *argv[])

      if(prop.integrated) {
        if (!gst_element_link_many (streammux, pgie,
    -        nvvidconv, nvosd, transform, sink, NULL)) {
    +        nvvidconv, nvosd, /*transform, */sink, NULL)) {
          g_printerr ("Elements could not be linked: 2. Exiting.\n");
          return -1;
        }

Hi @mchi ,

I have an HDMI monitor connected to my NX, and here is the link (The DeepStream image nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples pulled from NGC to my NVIDIA NX failed to start any application - #23 by a0975003518) to the issue I encountered when using

$ xhost +
$ sudo docker run -it --rm --net=host --runtime nvidia --device=/dev/video0:/dev/video0 -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.1 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples

That’s why I used the command without “-e DISPLAY=$DISPLAY”:

sudo docker run -it --rm --net=host --runtime nvidia --device=/dev/video0:/dev/video0 -w /opt/nvidia/deepstream/deepstream-5.1 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples

If you have HDMI connected, as Amy said, you should be able to get valid output from xrandr, as below.

I got this.

xrandr

No protocol specified
Can't open display :0

Make sure you log in to the desktop.

Hi @mchi ,

Does this solution work for the docker image? Since I am using the image pulled from NGC (nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples).

Do I need to try this in the container or outside the container (on the host)?

Hi @amycao ,

What does this mean?
If it means being root, then yes!

Yes, since you have HDMI connected, you need to find a valid “DISPLAY” outside of docker by running

export DISPLAY=:0 (or :1)
xrandr   # to check whether the display was exported successfully

But as you said above, neither “DISPLAY=:0” nor “DISPLAY=:1” works for you, which is strange. You could try Amy’s suggestion: log in to the Ubuntu desktop.

=====================================
If HDMI is connected, as Amy said above:

1. export DISPLAY=:0 (or :1)
xrandr   # to check whether the display was exported successfully
2. Run docker with “xhost +” and “-e DISPLAY=$DISPLAY” so that docker can use the display:
$ xhost +
$ sudo docker run -it --rm --net=host --runtime nvidia --device=/dev/video0:/dev/video0 -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.1 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples

Hi @mchi ,

OK, it works! The first time I tried it inside the container… then I tried it on my host, and it works!
Many thanks for your support!