Nvinfer plugin requires EGL context on system with no display interface

We have an NVIDIA Xavier NX on a custom carrier board without a display interface (the display pins are not connected to anything). When I run a GStreamer pipeline that uses the “nvinfer” plugin, I get failures related to the EGL context. We are using a custom installation based on L4T 35.1.0. Here is an example pipeline:

gst-launch-1.0 videotestsrc ! nvvideoconvert ! nvinfer config-file-path=/home/ubuntu/temp_mount/DeepStream-Yolo/config_infer_primary_yoloV8.txt ! fakesink

Here is the debug output when running the above pipeline:

ubuntu@er-mbu:/opt/nvidia/deepstream/deepstream/sources/apps$ GST_DEBUG=3 gst-launch-1.0 videotestsrc ! nvvideoconvert ! nvinfer config-file-path=/home/ubuntu/temp_mount/DeepStream-Yolo/config_infer_primary_yoloV8.txt ! fakesink
nvbufsurftransform: Could not get EGL display connection
Setting pipeline to PAUSED ...
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:04.770204640 3252 0xaaaabddc0720 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/home/ubuntu/temp_mount/DeepStream-Yolo/model_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x640x640
1 OUTPUT kFLOAT boxes 8400x4
2 OUTPUT kFLOAT scores 8400x1
3 OUTPUT kFLOAT classes 8400x1

0:00:04.827289952 3252 0xaaaabddc0720 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /home/ubuntu/temp_mount/DeepStream-Yolo/model_b1_gpu0_fp32.engine
nvbufsurface: Could not get EGL display connection
nvbufsurface: Can't get EGL display
0:00:04.837302816 3252 0xaaaabddc0720 ERROR nvinferallocator gstnvinfer_allocator.cpp:102:gst_nvinfer_allocator_alloc: Error: Could not map EglImage from NvBufSurface for nvinfer
0:00:04.837356512 3252 0xaaaabddc0720 WARN GST_BUFFER gstbuffer.c:951:gst_buffer_new_allocate: failed to allocate 88 bytes
0:00:04.837408256 3252 0xaaaabddc0720 WARN bufferpool gstbufferpool.c:305:do_alloc_buffer: alloc function failed
0:00:04.837462144 3252 0xaaaabddc0720 WARN bufferpool gstbufferpool.c:338:default_start: failed to allocate buffer
0:00:04.837489312 3252 0xaaaabddc0720 ERROR bufferpool gstbufferpool.c:559:gst_buffer_pool_set_active: start failed
0:00:04.837524096 3252 0xaaaabddc0720 WARN nvinfer gstnvinfer.cpp:994:gst_nvinfer_start: error: Failed to set buffer pool to active
0:00:04.853120992 3252 0xaaaabddc0720 WARN GST_PADS gstpad.c:1142:gst_pad_set_active:nvinfer0:sink Failed to activate pad
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer0: Failed to set buffer pool to active
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(994): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:nvinfer0
Setting pipeline to NULL ...
Freeing pipeline ...

Why does “nvinfer” need an EGL context when I’m not trying to display anything? How can this work without a display?

Hi,
The GStreamer command does not look correct. You would need an nvstreammux before the nvinfer plugin. We would suggest trying the config file of deepstream-app. You can change the default eglsink to fakesink by modifying the config file and give it a try.
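For reference, a minimal sketch of the sink change, assuming your config follows the DeepStream sample layout (in the sample configs, type=2 selects EglSink and type=1 selects FakeSink):

[sink0]
enable=1
# type=1 is FakeSink; the samples default to type=2 (EglSink)
type=1
sync=0
gpu-id=0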

Hello,
Thank you for your reply. I actually started with deepstream-app on an Orin AGX dev kit, and everything worked fine. I then transferred the same setup to our Xavier NX with the custom carrier board and no display, and started seeing these EGL issues, so the command I provided is a simplified reproduction of the same problem I see with deepstream-app. With deepstream-app, I noticed that the PGIE is what triggers the failure; when I disable it, the EGL issue goes away. That is why I am testing with just a videotestsrc, nvinfer, and a fakesink. I will try your suggestion with nvstreammux, but my original deepstream-app setup already includes an nvstreammux (the “streammux” group in my deepstream_app_config.txt file, shown below), and I still see the same EGL issue.
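For reference, my [streammux] group looks roughly like this (values reconstructed from memory, so treat them as illustrative rather than my verbatim file):

[streammux]
gpu-id=0
live-source=1
batch-size=1
batched-push-timeout=40000
width=1280
height=720
# 0 = platform default; on Jetson this means NVMM surface memory
nvbuf-memory-type=0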

I just noticed that even running “gst-inspect-1.0 nvstreammux”, “gst-inspect-1.0 nvinfer”, “gst-inspect-1.0 nvvideoconvert”, or “gst-inspect-1.0 nvvidconv” prints “nvbufsurftransform: Could not get EGL display connection” at the very beginning of the output. So I think that if we can figure out why that happens, we can solve the overall problem.
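One diagnostic I plan to run (a guess on my part, since EGL typically tries to open the X11 display named by DISPLAY): compare the inspect output with DISPLAY set and with it removed from the environment.

# check whether DISPLAY is set in this shell
echo "DISPLAY=${DISPLAY:-<unset>}"
# run the same inspect with DISPLAY removed from the environment
env -u DISPLAY gst-inspect-1.0 nvinfer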

Here is another example, using an “nvstreammux” before the “nvinfer”, but with an rtspsrc, since nvstreammux does not seem to accept the “videotestsrc” output directly. It shows the exact same problem.

ubuntu@er-mbu:~$ gst-launch-1.0 rtspsrc location=rtsp://172.26.1.32:554/stream0 ! nvstreammux batch-size=1 width=1280 height=720 ! nvinfer config-file-path=/home/ubuntu/temp_mount/DeepStream-Yolo/config_infer_primary_yoloV8.txt ! fakesink
nvbufsurftransform: Could not get EGL display connection
Setting pipeline to PAUSED ...
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:05.305251936 6045 0xaaaadde1f530 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1988> [UID = 1]: deserialized trt engine from :/home/ubuntu/temp_mount/DeepStream-Yolo/model_b1_gpu0_fp32.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x640x640
1 OUTPUT kFLOAT boxes 8400x4
2 OUTPUT kFLOAT scores 8400x1
3 OUTPUT kFLOAT classes 8400x1

0:00:05.362272992 6045 0xaaaadde1f530 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2091> [UID = 1]: Use deserialized engine model: /home/ubuntu/temp_mount/DeepStream-Yolo/model_b1_gpu0_fp32.engine
nvbufsurface: Could not get EGL display connection
nvbufsurface: Can't get EGL display
0:00:05.370718464 6045 0xaaaadde1f530 WARN nvinfer gstnvinfer.cpp:994:gst_nvinfer_start: error: Failed to set buffer pool to active
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer0: Failed to set buffer pool to active
Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(994): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:nvinfer0
Setting pipeline to NULL ...
Freeing pipeline ...
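(For completeness, here is the fuller linkage I would try next, assuming the camera sends H.264; the depay/parse/decode elements and the named mux request pad are my additions and are not part of the failing command above:)

gst-launch-1.0 rtspsrc location=rtsp://172.26.1.32:554/stream0 ! rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720 ! nvinfer config-file-path=/home/ubuntu/temp_mount/DeepStream-Yolo/config_infer_primary_yoloV8.txt ! fakesink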

Another piece of information: I tested this inside the “nvcr.io/nvidia/deepstream:6.3-triton-multiarch” Docker container on both the Orin AGX dev kit and my Xavier NX with the custom carrier (no display). It works on the Orin AGX, but I get the same EGL failure on the Xavier NX.
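For reference, this is roughly how I start the container on both boards (from memory, so a sketch rather than my exact command):

docker run -it --rm --net=host --runtime nvidia nvcr.io/nvidia/deepstream:6.3-triton-multiarch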

For DeepStream SDK release issues, we suggest opening a topic at Latest Intelligent Video Analytics/DeepStream SDK topics - NVIDIA Developer Forums.
Thanks
