Failed to run the official DeepStream 6.2 sample program (libEGL failed to authenticate)

I run all of this inside a Docker container.
My Host:

  • Hardware Platform : x86_64 + RTX2080Ti
  • NVIDIA GPU Driver Version: 530.41.03

Docker Container:
Base image: PyTorch 22.12 from NGC (nvcr.io/nvidia/pytorch:22.12-py3)

Building my Dockerfile from the base image:
I wrote the following Dockerfile, following the official guidelines, to install DeepStream 6.2.
Dockerfile (8.7 KB)
Docker run:

xhost +
docker run -itd --gpus all --name aiaio-22.12 --ipc=host -e DISPLAY=$DISPLAY --device /dev/snd -v /tmp/.X11-unix/:/tmp/.X11-unix aiaio:22.12

My X11 forwarding works, because I can run xclock inside the container and the clock appears on my host display.

Issue:
When I run the official sample program, only a black window pops up, with no content. (I think it is a problem with the libEGL library, but I don't know how to fix it.)

root@ee30e84098b6:/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app# deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
libEGL warning: DRI3: Screen seems not DRI3 capable
libEGL warning: DRI2: failed to authenticate
0:00:03.568952466   367 0x5638a1a23300 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 6]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224       
1   OUTPUT kFLOAT predictions/Softmax 20x1x1          

0:00:03.597210407   367 0x5638a1a23300 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 6]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine
0:00:03.621300772   367 0x5638a1a23300 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary_gie_2> [UID 6]: Load new model:/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/config_infer_secondary_carmake.txt sucessfully
0:00:05.253186016   367 0x5638a1a23300 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 5]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224       
1   OUTPUT kFLOAT predictions/Softmax 12x1x1          

0:00:05.281197712   367 0x5638a1a23300 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<secondary_gie_1> NvDsInferContext[UID 5]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 5]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine
0:00:05.282464598   367 0x5638a1a23300 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary_gie_1> [UID 5]: Load new model:/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/config_infer_secondary_carcolor.txt sucessfully
0:00:06.898347201   367 0x5638a1a23300 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 4]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input_1         3x224x224       
1   OUTPUT kFLOAT predictions/Softmax 6x1x1           

0:00:06.927166384   367 0x5638a1a23300 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 4]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 4]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine
0:00:06.928422698   367 0x5638a1a23300 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<secondary_gie_0> [UID 4]: Load new model:/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/config_infer_secondary_vehicletypes.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
0:00:08.608975900   367 0x5638a1a23300 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:08.637573388   367 0x5638a1a23300 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
0:00:08.638605999   367 0x5638a1a23300 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/config_infer_primary.txt sucessfully

Runtime commands:
	h: Print this help
	q: Quit

	p: Pause
	r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.


**PERF:  FPS 0 (Avg)	FPS 1 (Avg)	FPS 2 (Avg)	FPS 3 (Avg)	
**PERF:  0.00 (0.00)	0.00 (0.00)	0.00 (0.00)	0.00 (0.00)	
** INFO: <bus_callback:239>: Pipeline ready

cuGraphicsGLRegisterBuffer failed with error(304) gst_eglglessink_cuda_init texture = 1
** INFO: <bus_callback:225>: Pipeline running

**PERF:  758.05 (1.46)	758.05 (1.46)	758.05 (1.46)	631.71 (1.21)	
nvstreammux: Successfully handled EOS for source_id=2
nvstreammux: Successfully handled EOS for source_id=1
nvstreammux: Successfully handled EOS for source_id=0
nvstreammux: Successfully handled EOS for source_id=3
ERROR from secondary_gie_bin_queue: Internal data stream error.
Debug info: gstqueue.c(988): gst_queue_handle_sink_event (): /GstPipeline:pipeline/GstBin:secondary_gie_bin/GstQueue:secondary_gie_bin_queue:
streaming stopped, reason not-negotiated (-4)
Quitting
[NvMultiObjectTracker] De-initialized
App run failed
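The DRI2/DRI3 warnings at the top of the log usually mean the app fell back to Mesa's EGL instead of NVIDIA's. As a first diagnostic (a sketch only; the paths are typical for Ubuntu-based NGC images and may differ elsewhere), you can check inside the container whether the NVIDIA EGL vendor library was actually mounted by the container runtime:

```shell
# Check which driver capabilities the NVIDIA container runtime was asked for.
# NGC PyTorch images typically request only compute/utility/video, so the
# driver's EGL/GLX (graphics) libraries are never injected into the container.
echo "NVIDIA_DRIVER_CAPABILITIES=${NVIDIA_DRIVER_CAPABILITIES:-<unset>}"

# The glvnd vendor ICD and the driver's EGL library should both exist when
# the 'graphics' (or 'all') capability is enabled:
ls /usr/share/glvnd/egl_vendor.d/ 2>/dev/null || echo "no glvnd EGL vendor dir"
ls /usr/lib/x86_64-linux-gnu/libEGL_nvidia.so* 2>/dev/null || echo "libEGL_nvidia.so not found"
```

If libEGL_nvidia is missing, EGL falls back to Mesa, which cannot authenticate against the NVIDIA X driver — consistent with the "DRI2: failed to authenticate" warning.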

By the way, when I run DeepStream 6.1 using the nvcr.io/nvidia/deepstream:6.1.1-samples image, everything works fine.

Could you try running the app in the nvcr.io/nvidia/deepstream:6.2-devel docker on your host and check whether EGL works well?

Yes, it runs well. @yuweiw, is there something wrong with my Dockerfile? I built it according to the official guide.

Any ideas, dear yuweiw? :) I have tried to fix it myself many times, but it still doesn't work.

Our guide's Dockerfile is based on nvcr.io/nvidia/cuda:11.8.0-devel-ubuntu20.04, but your Dockerfile is based on nvcr.io/nvidia/pytorch:22.12-py3. Some of the drivers or dependencies in your docker may not match very well. Why don't you just use our DeepStream docker to run DeepStream?

Because I need a development environment that covers the complete workflow from training to inference; I wouldn't need such a heavy container if it were only for deployment. The nvcr.io/nvidia/pytorch:22.12-py3 image is officially built by NVIDIA, so I would expect it to work with DeepStream.

libEGL warning: DRI3: Screen seems not DRI3 capable
libEGL warning: DRI2: failed to authenticate

Judging from the log, it may be a DRI issue in your docker environment. Since this is the DeepStream forum, we have never encountered a similar situation in our DeepStream docker. We suggest you install your other software inside the DeepStream docker, or you can try to work out how to adapt DRI in your own docker yourself.

This time I followed the official guide to build the Dockerfile exactly. Note that I copied this Dockerfile verbatim and didn't install any other software, but it still returned the same error, so I think this tutorial may be a bit incomplete.

Could it be that the graphics driver is incompatible?

It could be. We usually recommend following the configuration below:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html#id6
You can also try setting the -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix options when you start the docker.
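If the cause is a missing graphics capability rather than the driver version, the container can be started with the full capability set. This is an untested sketch of the original run command with one extra environment variable (NVIDIA_DRIVER_CAPABILITIES is honored by the NVIDIA container runtime; the image name is the poster's own):

```shell
xhost +local:docker   # allow local container clients on the X server

# Sketch only: same command as before, plus NVIDIA_DRIVER_CAPABILITIES so the
# runtime also mounts the driver's EGL/GLX (graphics, display) libraries.
docker run -itd --gpus all --name aiaio-22.12 --ipc=host \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -e DISPLAY=$DISPLAY \
  --device /dev/snd \
  -v /tmp/.X11-unix/:/tmp/.X11-unix \
  aiaio:22.12
```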

Everything matched except the driver; mine is not R525.85.12. Should I downgrade?

Yes, you can try that. It could be a CUDA/EGL version mismatch.

I ran the sample application on a Windows device with the official docker image
nvcr.io/nvidia/deepstream:6.2-devel
and it didn't work either, even though I was using fakesink. Maybe always chasing the latest driver is not a wise choice; I should consider using a more stable driver version. Thank you for your patience anyway!
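For reference, switching the sample to fakesink is done in the app config rather than on the command line. In the [sink0] group of the source4 config, the sink group would look roughly like this (a sketch; in deepstream-app, type=1 selects FakeSink, type=2 EglSink, type=3 File):

```ini
[sink0]
enable=1
# 1=FakeSink, 2=EglSink, 3=File
type=1
sync=0
```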
