Looking for suggestions: EGL app running without desktop

I am trying to write a simple EGL app and am running into an issue. This might be relevant to all Jetson products.
Detailed information can be found here

I am curious what your use case is. Most of the APIs in the EGL category are for graphics rendering. They tend to require binding to a server (either X, or Vulkan, which might run on top of X these days until something dedicated is built).

First, thank you for taking the time to answer.

I have an idea that I am researching which might actually work on lower-end hardware like the Nano, but I have a spare AGX Xavier.
I am just experimenting: I previously developed a full-screen app which uses /dev/fb0, so I would like to possibly revive my old project.
So I am looking to bypass the desktop for a kiosk-style app.
I used OpenGL on the desktop on different platforms all the way back to the SGI days. After doing some reading, it looks like you can achieve hardware acceleration without the desktop running. Am I correct?
I did try to use the Qt5 QPA to build a minimal app that just loads a PNG image and displays it, purely to test whether it works.
I tried exporting


But I am having issues. Maybe I misunderstood something from my reading. Regardless of my issue, can this be achieved on my hardware with JetPack 5.1.2? Do I need to make any system modifications to enable something?
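For reference, this is the kind of setup I mean (Qt's eglfs QPA plugin is the usual way to render via EGL without an X server; the app name below is just a placeholder):

```shell
# Tell Qt to use its EGL full-screen platform plugin instead of X11.
export QT_QPA_PLATFORM=eglfs

# Then run the app from a console (outside any X session), e.g.: ./myapp
```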

I am assuming all JetPack releases have the same configuration across different hardware like the Nano? From what I can tell, tegra_fb exists.

I am scratching my head a bit now :)


NVIDIA would have to answer for anything official. I can tell you, though, that OpenGL and OpenGL ES require a context to render to. In some cases even CUDA needs this, as the X server isn’t just for rendering: X is also where the NVIDIA driver plugs in as a dynamic load (you can search your “/var/log/Xorg.0.log” for “ABI”).

There are other cases where the GPU can be used directly for CUDA, but I could not tell you what is required.

Something to consider is that the X server is designed to listen for “events”. Those events translate from something like OpenGL/GLES via a driver, either for the framebuffer (e.g., software-rendered Mesa) or via a hardware-accelerated driver (the NVIDIA GPU driver as loaded into the X server). The events don’t even have to be rendered, but this provides an API around a buffer that just happens to be able to render. In most cases you are going to need a rendering context.

If you are using code or a library which works with events, and you don’t have an X server, then you probably need a virtual X server: the sort of thing used in remote desktop apps. The drivers won’t care whether the client/server combination servicing the events actually connects to a monitor or not. I think Qt will require a rendering context.
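As an example, a headless box can run something like Xvfb as that virtual server. A rough sketch, assuming the Xvfb package is installed and display “:1” is free:

```shell
# Start a virtual framebuffer X server on display :1
# (no monitor attached; software rendered).
Xvfb :1 -screen 0 1280x720x24 &

# Point X clients at the virtual display:
export DISPLAY=:1
```

Note that Xvfb itself is software rendered, so this gets an event-servicing server running, not hardware acceleration.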

No doubt there are ways to talk directly to the GPU driver without some event-based API in front of it. I don’t know what that method is, though; someone from NVIDIA would need to answer. Or you could install a virtual desktop server which is able to load the NVIDIA driver.