Deepstream pipeline does not work without EGL Display

Platform used: AGX Xavier
Jetpack: 5.0.2
L4T: R35 (release), REVISION: 1.0, GCID: 31346300, BOARD: t186ref, EABI: aarch64, DATE: Thu Aug 25 18:41:45 UTC 2022
Base container I’m using: nvcr.io/nvidia/deepstream-l4t:6.1.1-base
Problem:

I am trying to run a pipeline on a Xavier that will not have a native display attached to it (currently I’m using SSH with X session forwarding for development).

I have a modified version of ds-example-1 that sends the output to a fakesink

However, I would like to get the output via ssh -X while I’m working on it, but I am having problems getting the output to come over the SSH link.

To understand what is happening, I turned off the output (by using a fakesink) but kept the DISPLAY variable pointed at my remote display. (The forwarded display itself works fine: I’ve tested it with xeyes and other GStreamer pipelines, and everything also works when I set the DISPLAY variable to the value that points to the native EGL display.)
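
For reference, the sink swap is roughly the following (a minimal sketch, assuming the usual deepstream_test_1-style script where Gst and sys are already imported; fakesink and its "sync" property are stock GStreamer):

print("Creating fakesink \n")
sink = Gst.ElementFactory.make("fakesink", "fakesink")
if not sink:
    sys.stderr.write(" Unable to create fakesink \n")
# Don't sync to the clock; we only care about the probe/metadata output.
sink.set_property("sync", False)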

With the display output off I still see:

No EGL Display 
nvbufsurftransform: Could not get EGL display connection

This probably means that some other element relies on the DISPLAY variable, or on the display being natively connected to the Jetson.

I did a very basic test by adding some print statements to get a hint.

My error output is:

Creating Pipeline 
 
Creating Source 
 
Creating H264Parser 

Creating Decoder 

No EGL Display 
nvbufsurftransform: Could not get EGL display connection
Creating StreamMux 

Creating nvinfer 

Creating nvvideoconvert 

Creating nvdsosd 

Creating queue 

Creating fakesink 

Playing file ../../samples/streams/sample_720p.h264 
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 

nvbuf_utils: Could not get EGL display connection
nvbuf_utils: Could not get EGL display connection
Opening in BLOCKING MODE 
0:00:00.394424741 103453     0x20f89210 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:03.455796829 103453     0x20f89210 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/root/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:03.493828920 103453     0x20f89210 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /root/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
nvbufsurface: eglGetDisplay failed with error 0x3000
nvbufsurface: Can't get EGL display
0:00:03.510444331 103453     0x20f89210 WARN                 nvinfer gstnvinfer.cpp:943:gst_nvinfer_start:<primary-inference> error: Failed to set buffer pool to active
Error: gst-resource-error-quark: Failed to set buffer pool to active (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(943): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference
root@nvidia-desktop:~/jetson-deepstream/scripts# gst-inspect-1.0 nvstreammux
No EGL Display 
nvbufsurftransform: Could not get EGL display connection

I think the lines (not super confident, though)

No EGL Display 
nvbufsurftransform: Could not get EGL display connection

are due to the lines below (I’ve attached the full script too)

# Use nvdec_h264 for hardware accelerated decode on GPU
print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")

    print("Creating StreamMux \n")
    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

full script (10.0 KB)

I’m guessing the problem is caused by either nvv4l2decoder or nvstreammux (unless the printed output and the actual execution are way out of step).
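
To narrow that down, instantiating each factory in isolation should reveal which one emits the warning (a rough bisection sketch, using the element names from the pipeline above):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# The "No EGL Display" line should appear on stderr right after the
# factory that probes the display is instantiated.
for name in ("nvv4l2decoder", "nvstreammux", "nvinfer",
             "nvvideoconvert", "nvdsosd"):
    print(f"--- making {name} ---", flush=True)
    Gst.ElementFactory.make(name, None)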

My main questions are:

  • Are there elements other than nveglglessink that require an EGL display connection?
  • Is there a way to work around that (i.e. have the NN bits use the Jetson’s GPU/DLA while everything else stays away from the display hardware and anything EGL-related)?

Cheers,
Ganindu.

Please let me know if you need further clarification on anything.

A physical monitor is needed for nveglglessink. Can you try using RTSP output to view the results remotely?

Hi Fiona,
Thanks for getting back to me. Yes, RTSP works, but I don’t want to use it for this particular application. As I’ve mentioned, I don’t want to use nveglglessink either (because I don’t want a monitor hooked onto the Jetson).

Let me give more context to why I asked the question…

I actually wanted the pipeline to work with a sink that I can access via SSH (with X session forwarding enabled). I managed to get xvimagesink working (without DeepStream elements in the pipeline) inside the Docker container by adding

lib, /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstxvimagesink.so
lib, /lib/aarch64-linux-gnu/libXv.so.

to the l4t.csv file.

Furthermore,

gst-launch-1.0 v4l2src device=/dev/video0 !  capsfilter caps="video/x-raw, width=1920,height=1080" !  queue   ! nvvideoconvert ! autovideosink

works for me via ssh -X (I can see the output on my remote display).

The problem that led to this question occurs when I try to get a pipeline that has DeepStream elements to work.

To begin with, I hoped that if I replaced nvegltransform and nveglglessink with a fakesink it would work over SSH, but I noticed that it only works if I set the $DISPLAY environment variable to that of the display attached to the Jetson, despite no longer having an element like “nveglglessink”.

Then I thought that if I gst-inspect-ed the elements I’d find some config key that would help me, but I wasn’t lucky with that either.

So basically, at this point I want to know why my pipeline, which is

h264_filesource > h264parser > decoder > streammux > pgie > nvvidconv > nvosd > queue (optional, I think) > fakesink

is complaining about an EGL display connection when I haven’t even used nveglglessink or anything that has “egl” in its name. (I get the same errors with and without Docker.)

But somehow it works in the same terminal when I export the $DISPLAY variable for the native display.

Is there a way to get around this? (like changing some config key or using different elements)
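
For reference, here is the same graph as a quick Gst.parse_launch experiment (a sketch: batch-size/width/height are my guesses for the 720p sample, and the nvinfer config is the dstest1_pgie_config.txt that shows up in the logs below):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "nvstreammux name=mux batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=dstest1_pgie_config.txt ! "
    "nvvideoconvert ! nvdsosd ! queue ! fakesink "
    "filesrc location=../../samples/streams/sample_720p.h264 ! "
    "h264parse ! nvv4l2decoder ! mux.sink_0")
pipeline.set_state(Gst.State.PLAYING)
# Block until the EGL error (or EOS) shows up on the bus.
pipeline.get_bus().timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)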

I will list the full error output below again for reference.

Output when using DISPLAY=:1 (native display):

Creating Pipeline 
 
Creating Source 
 
Creating H264Parser 

Creating Decoder 

Creating StreamMux 

Creating nvinfer 

Creating nvvideoconvert 

Creating nvdsosd 

Creating queue 

Creating fakesink 

Playing file ../../samples/streams/sample_720p.h264 
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 

Opening in BLOCKING MODE 
0:00:00.412867829 23137 0xaaab0a0a6b50 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:03.472019368 23137 0xaaab0a0a6b50 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:03.518155611 23137 0xaaab0a0a6b50 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:03.527030328 23137 0xaaab0a0a6b50 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Frame Number=0 Number of Objects=10 Vehicle_count=6 Person_count=4
Frame Number=1 Number of Objects=8 Vehicle_count=5 Person_count=3
Frame Number=2 Number of Objects=7 Vehicle_count=4 Person_count=3
Frame Number=3 Number of Objects=10 Vehicle_count=5 Person_count=5
Frame Number=4 Number of Objects=8 Vehicle_count=4 Person_count=4
Frame Number=5 Number of Objects=7 Vehicle_count=5 Person_count=2
Frame Number=6 Number of Objects=10 Vehicle_count=6 Person_count=4
Frame Number=7 Number of Objects=10 Vehicle_count=7 Person_count=3
.......................................................................................................................................................
.......................................................................................................................................................

Output when using DISPLAY=localhost:10.0 (remote display):

Creating Pipeline 
 
Creating Source 
 
Creating H264Parser 

Creating Decoder 

libEGL warning: DRI3: failed to query the version
libEGL warning: DRI2: failed to authenticate
Creating StreamMux 

Creating nvinfer 

Creating nvvideoconvert 

Creating nvdsosd 

Creating queue 

Creating fakesink 

Playing file ../../samples/streams/sample_720p.h264 
Adding elements to Pipeline 

Linking elements in the Pipeline 

Starting pipeline 

Opening in BLOCKING MODE 
0:00:00.471864432 23396 0xaaaadf0c7b50 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:03.565633382 23396 0xaaaadf0c7b50 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:03.610230232 23396 0xaaaadf0c7b50 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
nvbufsurface: Failed to create EGLImage.
0:00:03.627310005 23396 0xaaaadf0c7b50 WARN                 nvinfer gstnvinfer.cpp:943:gst_nvinfer_start:<primary-inference> error: Failed to set buffer pool to active
Error: gst-resource-error-quark: Failed to set buffer pool to active (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(943): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference

Hi

As far as I know, you can’t run a DeepStream pipeline in ssh -X. It is possible to use some of NVIDIA’s elements like decoders and nvvidconv without an EGL display, but nvvideoconvert throws a critical error and stops instead. Since nvvideoconvert is needed for DeepStream and can’t be replaced by nvvidconv, you can’t run DeepStream pipelines with X11 forwarding enabled.
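
If you want to check this yourself, something like the following should show the difference (a rough sketch; run it with DISPLAY unset, and videotestsrc plus fakesink keep it display-free):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# With DISPLAY unset, nvvidconv should reach EOS, while nvvideoconvert
# (per the above) errors out when it sets up its buffer pool.
for conv in ("nvvidconv", "nvvideoconvert"):
    p = Gst.parse_launch(f"videotestsrc num-buffers=30 ! {conv} ! fakesink")
    p.set_state(Gst.State.PLAYING)
    msg = p.get_bus().timed_pop_filtered(
        10 * Gst.SECOND, Gst.MessageType.ERROR | Gst.MessageType.EOS)
    print(conv, "->", "ERROR" if msg and msg.type == Gst.MessageType.ERROR else "OK")
    p.set_state(Gst.State.NULL)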

I would suggest you use a streaming protocol to display the results instead.


Hi @miguel.taylor,

Thanks a lot for the reply; that is what I thought as well, until I just randomly got an ssh -X preview pipeline to work with nvvideoconvert. (It was too tempting to let go, as it’s much easier to take my laptop to vehicles than to do the usual fiddling with the fiddly keyboard, mouse and display, haha.)

ssh_x_960_540

The command string I’ve used over ssh -X:

gst-launch-1.0 v4l2src device=/dev/video0 !  capsfilter caps="video/x-raw, width=960,height=540" !  queue   ! nvvideoconvert ! autovideosink

Note: this is a pipeline (run over ssh -X) that has no DeepStream inference elements in it.
Maybe the EGL stuff does not get used unless a pgie element is used?

Cheers,
Ganindu.

P.S.

The GIF I uploaded is slow and small (you might need to zoom to see the command), but it works nice and smooth IRL.

EDIT:

Just realised that the pipeline works with just

gst-launch-1.0 v4l2src device=/dev/video0 !  capsfilter caps="video/x-raw, width=960,height=540"  ! autovideosink

or

gst-launch-1.0 v4l2src device=/dev/video0 !  capsfilter caps="video/x-raw, width=960,height=540"  ! xvimagesink

or even

gst-launch-1.0 v4l2src device=/dev/video0  ! xvimagesink

So I think your point still stands! (Basically, the queue and nvvideoconvert are redundant here, so GStreamer optimises by bypassing them?)


Yes, I think in the case of that pipeline nvvideoconvert is passing the buffers through without processing them, so it is not using EGL.
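
One way to test that theory is to pin different formats on each side of nvvideoconvert, so passthrough can’t kick in (a rough sketch):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# Forcing a real I420 -> NV12 conversion means nvvideoconvert has to touch
# the buffers; if the passthrough theory holds, the EGL errors should come
# back when this runs over ssh -X.
p = Gst.parse_launch(
    "videotestsrc num-buffers=60 ! "
    "video/x-raw,format=I420,width=960,height=540 ! "
    "nvvideoconvert ! video/x-raw,format=NV12 ! fakesink")
p.set_state(Gst.State.PLAYING)
p.get_bus().timed_pop_filtered(
    10 * Gst.SECOND, Gst.MessageType.ERROR | Gst.MessageType.EOS)
p.set_state(Gst.State.NULL)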

As I mentioned, I would recommend using streaming instead. The easiest way is with UDP; here is an example:

Producer (Jetson):

HOST='192.168.0.107'
PORT=5004
gst-launch-1.0 \
videotestsrc ! \
nvvideoconvert ! \
nvv4l2h264enc insert-sps-pps=true  iframeinterval=5 control-rate=1 maxperf-enable=true name=encoder ! \
h264parse config-interval=5 ! \
udpsink port=$PORT host=$HOST sync=false

Consumer (PC):

HOST='192.168.0.107'
PORT=5004
gst-launch-1.0 \
udpsrc address=$HOST port=$PORT ! \
h264parse ! avdec_h264 ! \
videoconvert ! autovideosink
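
Note that this sends the raw H.264 byte-stream over UDP without RTP packetization; if you run into packet loss or sync issues, the more conventional variant adds rtph264pay on the producer and rtph264depay (with the matching application/x-rtp caps on udpsrc) on the consumer.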

Thanks a lot, Miguel! I appreciate the help!


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.