Headless configuration with remote OpenGL-accelerated desktop - Jetson Xavier

Hey there, there are a few posts on this topic but I haven't been able to come up with a recipe that works for our use case. Basically, we have a Jetson Xavier NX without an attached display, and we would like to run an application that requires a desktop with OpenGL acceleration.

Sharing the main desktop over x11vnc or similar would work, but the problem I have is that, unless a monitor is connected to one of the ports, X11 only shows the NVIDIA logo. The system is configured to auto-login, and as soon as I plug in a monitor the desktop appears.

I've tried to generate an xorg.conf that would allow a virtual display, but I haven't been able to get it to work. Here is a snippet of the config that attempts to create the virtual display:

Section "Device"
    Identifier "Tegra0"
    Driver     "nvidia"
    Option     "AllowEmptyInitialConfiguration" "true"
EndSection

Section "Screen"
    Identifier   "Screen0"
    Device       "Tegra0"
    Monitor      "Monitor0"
    DefaultDepth 24
    SubSection "Display"
        Depth   24
        Modes   "1920x1080"
        Virtual 1920 1080
    EndSubSection
    Option "AllowEmptyInitialConfiguration" "true"
EndSection

Looking at the difference in the Xorg log file between having a monitor connected and not, the key entry that allows the desktop to become properly sharable (not just an NVIDIA logo) appears to be:

[   193.044] (II) NVIDIA(0): Setting mode "HDMI-0: nvidia-auto-select @1920x1080 +0+0 {AllowGSYNC=Off, ViewPortIn=1920x1080, ViewPortOut=1920x1080+0+0}"

Before that, the mode is set to: (II) NVIDIA(0): Setting mode "NULL"

Assuming this is what I am missing, is there a way to force the mode? I've tried:

Option "ConnectedMonitor" "DFP-0"
Option "CustomEDID" "DFP-0:/home/slacroix/xorg_debug/SAMSUNG_0.edid"

Thanks for the help

Hi,
Please try the method below and see if it works in your use case:
Jetson AGX Orin FAQ

Q: How to configure VNC w/o monitor connected for Jetson?

Hey, thanks. This gets me to the desktop, but with this method glxheads and glxgears use the llvmpipe software renderer instead of the iGPU; my application is similar and requires hardware acceleration.

Would you have a recipe that allows X to use the GPU device instead?

Thanks

Then just attach a dummy display:

This plug makes the system think that there is a display attached, so the GPU gets activated. No software is required. These plugs are also available for HDMI and VGA, and with different resolutions encoded in the internal EEPROM.

fchk

Thanks, I wish I could, but unfortunately I am not using a dev kit and that isn't an option for my use case. My configuration is truly headless and has no ability to fake the HDMI connection with a hardware dongle.

ok. Is this a custom carrier board without any display connector?

HDMI only requires a 24C02 EEPROM for the EDID data and a connection from HPD to VBUS on the HDMI connector. Maybe you can still wire that circuit into your carrier board.

See:

That's right, this is a possible option, thanks for sharing it. To your knowledge, is it confirmed that there is no software solution to force the driver to think there is a monitor there?

You may see: X Window System — Jetson Linux Developer Guide documentation

I can't guarantee that there are no software solutions. But as a developer of carrier boards, adding an EEPROM is much easier for me than messing with display drivers.

Thanks, I had a look and tried using nvidia-xconfig to set the mode and an EDID file for the HDMI port, and enabled mode debug to see if it would give me a bit more information, but no dice.
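
For reference, this is roughly the nvidia-xconfig invocation I tried (flags taken from the tool's help output; I haven't verified that the Tegra driver honours all of them):

# Regenerate xorg.conf with a forced monitor, a custom EDID, and mode debugging enabled.
sudo nvidia-xconfig --allow-empty-initial-configuration \
                    --connected-monitor="DFP-0" \
                    --custom-edid="DFP-0:/home/slacroix/xorg_debug/SAMSUNG_0.edid" \
                    --mode-debug
# Then restart the display manager and inspect /var/log/Xorg.0.log.
sudo systemctl restart gdm3   # or lightdm, depending on the L4T release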

Any specific area that you think I should look into that I might have missed?

Sorry, I haven't worked on a case similar to yours for years.
Not sure, but maybe VirtualGL software could help.
Someone else may be better placed to advise on this case. You may give @fchkjwlsq's proposal a try.

I've tried VirtualGL but found the performance hit to be substantial. GLXspheres went from 1500 fps with vsync disabled to around 40 fps.

Thanks guys, I appreciate the time and help in thinking through possible solutions. Keep them coming.

If the frame rate is reduced by that much, then it is possible it is reverting to software rendering. If the VirtualGL server is using the Jetson's GPU, I would expect it to be faster. While logged in via VirtualGL, I suggest going to the Jetson side and finding the X log related to this via:
ls -ltr /var/log/Xorg.*.log | tail -n 1

Save a copy of that log and post it here. There should be mention of the NVIDIA module being loaded into the graphical ABI. The log would look something like this:

[   500.327] (II) LoadModule: "nvidia"
[   500.328] (II) Loading /usr/lib/xorg/modules/drivers/nvidia_drv.so
[   500.330] (II) Module nvidia: vendor="NVIDIA Corporation"
[   500.330]    compiled for 4.0.2, module version = 1.0.0
[   500.330]    Module class: X.Org Video Driver

(this is from an older TX2, so versions would differ, but the layout of the log would be the same)

The Xorg session associated with the main desktop definitely seems to have the nvidia driver loaded. Would the TurboVNC session have its own Xorg.#.log file? If so, I am having a hard time finding it…

While I keep looking for it, the next best thing I can offer is the output of glxspheres, which is provided with VirtualGL. When I run it in a VirtualGL environment it seems to pick up the right device, so would that indicate it's not using the software renderer?

Running glxspheres on the main desktop with a monitor connected and vsync disabled, this runs at 1500 fps:

(screenshot of glxspheres output)

To help understand the problem I'll give some background first. It may seem not to matter, but it is the basis for the question.

X was designed to be a distributed system by a bunch of M.I.T. students. As such, the X server is on any PC or workstation, and what gets rendered is an interpretation of X “events”. They’re basically a table of numbers (and perhaps data arguments). When you type on a keyboard, or when a program updates graphics, one or more events are sent.

One type of remote software involves running an X application on a given computer, e.g., a Jetson, but redirecting events to another host (e.g., your host PC). In that case the GPU of the Jetson is never involved; the program itself sends and receives events between itself and the host PC (perhaps via ssh forwarding of X events, e.g., using the “ssh -X” or “ssh -Y” command and then running a GUI program on that command line).
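
As a concrete illustration of this first type (hostname and user are just placeholders):

# Run on the host PC: the program executes on the Jetson, but its X events are
# forwarded back over ssh and rendered by the host PC's X server and GPU.
ssh -X user@jetson.local
glxgears    # drawn on the host PC's display; the Jetson's GPU does no rendering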

Another type of remote software is essentially not remote at all, but does send and receive copies of X event results rather than events. This is a special server. The default server runs the result of events into a framebuffer which is expected to have a monitor physically attached to it, but the GUI program itself does not ever really care about that. The normal X server sees events and then updates the buffer, and if there is a monitor which happens to be watching the buffer, everyone is happy.

Now if we want to forward to another computer, but we want the work done on the Jetson, it means we have to use the Jetson’s framebuffer, but we can’t expect to have a monitor attached. That implies the X server itself needs the option to keep rendering even if there is no monitor. The first issue is that the framebuffer has to know what the monitor resolution is (and other things) before it can properly arrange a framebuffer (imagine if a missing monitor resulted in a single line of pixels that’s really long; or just defaulted to some antique device). This is what a virtual desktop server starts with: An ordinary X server, but an ability to artificially tell the server to “pretend” that it has a monitor of a given setting.

If you have that virtual monitor, then it becomes possible to send the results of events to another computer, and a special program on that other computer can display the results (it’s basically just compressed realtime pixel data). No event is ever sent in that case.

There is a log file for the Jetson side’s server if and only if the GPU there is what interprets the events. If this happens to be a virtual server, then there is a log for the virtual server (named based on the “$DISPLAY” context) on the Jetson with that server. If there is a regular server running on the Jetson, and one forwards events instead of the result of events, then there is no log on the Jetson. The host PC you are using at the time of receiving the data, if it uses X, will have its own X server log. This log will not tell you anything about the virtual desktop application…that is just another X application, e.g., it works just like Gimp or Blender. It is true that the virtual desktop client you run from the host PC could have its own log, but it won’t be an X log (X logs are about events, and configuration of the buffer).

So yes, on the Jetson end, there will be an Xorg.#.log for every server which runs. There can be multiple servers. It might be that a local monitor runs on one server, and the virtual server is different, whereby they both have logs; or it might be the monitor is plugged in such that the virtual server is the framebuffer the server services…then there will be only one log, and it won’t care whether or not the results of X rendering is also copied to your virtual desktop on another computer.

You might examine “echo $DISPLAY” in these cases:

  • A locally attached monitor and keyboard.
  • Your host PC, but not within the virtual desktop client (this is the host PC’s context, not the Jetson’s).
  • Your host PC, whereby you run this within a terminal that is itself within the virtual desktop client (this might match the locally attached monitor and keyboard; it depends on whether there is a second server versus sharing the local and remote via the same framebuffer).

Check “ls -ltr /var/log/Xorg.*.log”, and note these are sorted by time; if two logs are created at almost the same time, then it implies you have two servers.
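
A quick sketch of those checks on the Jetson side (the log number in the last command is just an example; use whichever logs actually show up):

echo $DISPLAY                              # e.g. ":0" for the local server, ":1" for a second/virtual server
ls -ltr /var/log/Xorg.*.log                # one log per running X server, oldest first
grep -i 'LoadModule: "nvidia"' /var/log/Xorg.0.log   # did this server load the NVIDIA driver?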

Something important to note is that if you run locally only, then you know the GPU is being used locally, and this is what might limit frame rate. If you are running remotely, and via forwarding of events (e.g., “ssh -X” or “ssh -Y”), then you are limited by the host PC’s GPU, not the Jetson, and you are further limited by the networking between the two. If you are using a virtual desktop client, then the framerate is basically whatever the Jetson is doing, but the display on the host PC might be further limited by networking (if the Jetson updates 1500 frames per second, and all pixels change in each frame, then that is an enormous amount of data for a gigabit network…events are tiny so far as data size, whereas a framebuffer is a lot of data…compression helps, but compression depends on the particular data).

One question which always exists, if you have installed a virtual desktop server: Is the virtual server using the GPU, or is it instead reverting to software rendering? To answer that, from your virtual desktop on the host PC, you might run this command (from the virtual desktop or local to the Jetson you might need to “sudo apt-get install mesa-utils”):

glxinfo | egrep -i '(version|nvidia)'

The output of this command will change depending on whether the GPU is used or if it is software rendered. If it is an NVIDIA driver, you’re in good shape; if it is Mesa, or some other software version, then you’ll not get the performance until the virtual server is changed to use the GPU. Even if it is the NVIDIA GPU you might find the remote end is slower due to networking. Perhaps if you have a local monitor which is using the virtual desktop, and if that desktop shows the NVIDIA GPU being used, then that could be a lot higher frame rate than the virtual desktop client. Maybe.
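
For illustration only, since exact strings and version numbers will differ, the distinguishing lines look something like this:

# GPU-accelerated through the NVIDIA driver:
#   OpenGL vendor string: NVIDIA Corporation
#   OpenGL version string: 4.6.0 NVIDIA ...
# Software rendering (Mesa/llvmpipe):
#   OpenGL renderer string: llvmpipe (LLVM ...)
#   OpenGL version string: ... Mesa ...
glxinfo | egrep -i '(vendor|renderer|version)'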

Hey Linuxdev, thank you for the really high-quality and thorough responses. I got sidetracked and had to put this on pause, but I plan on getting back to it this week and will report back. I also plan to run experiments on Orin and see if I get similar results.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.