Using EGLSink with Kubernetes-deployed IVA

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
dGPU running on Ubuntu 20.04.4
• DeepStream Version
6.1.1 (also tried 6.0.1)
• JetPack Version (valid for Jetson only)
• TensorRT Version
8.4.1.5
• NVIDIA GPU Driver Version (valid for GPU only)
515.86.01
• Issue Type (questions, new requirements, bugs)
Question
• How to reproduce the issue? (This is for bugs; include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

I’ve noticed that even after running xhost +, a K8s deployment that includes the DeepStream demo image still shows “EGL call returned error 3000” and “Couldn’t setup window/surface from handle” errors in the logs. This happens with an EGL sink (with a fake sink I get no errors), and the application is run with the -t flag.

Is there a way to have the EGL display/sink work with K8s?

• Requirement details (This is for new requirements; include the module name, i.e. which plugin or which sample application, and the function description.)

Is there a physical monitor connected to your dGPU device? How did you run the K8s deployment?

Yes, there is. I’ve previously run the application on this same machine using Docker.

The K8s deployment is run via Helm, using a Python script to start the application.

What we know about running nveglglessink in Docker is that you need to run “xhost +” before “docker run”.
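
Concretely, the Docker invocation usually looks something like this (a sketch only; the image tag and config path are placeholders you would adjust for your setup):

```
# Allow local clients to connect to the X server on the host
xhost +

# Run the DeepStream container with access to the GPU, the host display,
# and the host X11 socket (image tag and config path are placeholders)
docker run --gpus all -it --rm \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  nvcr.io/nvidia/deepstream:6.1.1-devel \
  deepstream-app -c <path_to_config>
```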

Yes, I know that, but I’m using Kubernetes, which uses a script to configure and run the deepstream-app demo application in a K8s pod. I’ve run xhost + before executing the Helm script, but I still get the errors I originally listed.

I said I previously ran the application via Docker. I’m no longer using Docker (other than to build the container image). K8s is orchestrating and running the deployment.

Following up here. Any ideas on how I can get the EGL sink to work with K8s, given what’s already been said above?
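
For reference, my understanding is that the pod would need the K8s equivalent of those Docker settings, i.e. the DISPLAY variable and the host’s X11 socket mounted into the container. A rough sketch of what I mean (pod name, image tag, and config path are placeholders, and it assumes the NVIDIA device plugin is installed on the cluster):

```
# Apply a minimal test pod that forwards the host display into the container
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: deepstream-egl-test                            # placeholder name
spec:
  containers:
    - name: deepstream
      image: nvcr.io/nvidia/deepstream:6.1.1-devel     # placeholder image tag
      command: ["deepstream-app", "-c", "/path/to/config.txt"]   # placeholder config
      env:
        - name: DISPLAY
          value: ":0"                                  # must match the host X display
      volumeMounts:
        - name: x11-socket
          mountPath: /tmp/.X11-unix
      resources:
        limits:
          nvidia.com/gpu: 1                            # assumes the NVIDIA device plugin
  volumes:
    - name: x11-socket
      hostPath:
        path: /tmp/.X11-unix                           # host X11 socket
EOF
```

The pod would presumably also need to be scheduled on the node that actually has the monitor attached (e.g. via a nodeSelector), with xhost + run in that node’s X session.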