Unable to increase resolution in headless display on Orin NX

I have an Orin NX on a Connect Tech Hadron carrier board (no HDMI or DisplayPort outputs) that I want to set up VNC on. It’s flashed with a fresh JetPack 6.2.1 (with the Connect Tech BSP). I’ve got VNC working, but its max resolution is 1280x720. I’d like to set it to 2560x1440, but nothing I do seems to make a difference. The typical error is:

$ xrandr --display :0 --fb 2560x1440
xrandr: Failed to get size of gamma for output default
xrandr: screen cannot be larger than 1280x720 (desired size 2560x1440)

I’ve performed the setup from “What is the best way to control the jetson Orin GUI remotely? - #5 by KevinFFF” and I can’t even get the 1920x1080 it uses there. (I’ve tried variations of putting it in /usr/share/X11/xorg.conf.d/xorg.conf, /etc/X11/xorg.conf, and /etc/X11/xorg.conf.d/10-headless.conf, and none of those made a difference.)
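
One thing I can check (assuming the log is in the standard location) is which configuration file the X server actually loaded, since Xorg records that at startup:

$ grep -iE "using .*config" /var/log/Xorg.0.log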

Any other ideas of what to try? I’m not sure if it’s specifically because of the Hadron carrier. I have an Orin AGX dev kit I guess I could try, but that may be too big of a change for meaningful troubleshooting.

$ xrandr --display :0
xrandr: Failed to get size of gamma for output default
Screen 0: minimum 320 x 240, current 1280 x 720, maximum 1280 x 720
default connected primary 1280x720+0+0 0mm x 0mm
   1280x720      60.00*
   1024x576      60.00
   960x540       60.00
   800x600       60.00    56.00
   640x480       60.00
   684x384       60.00
   640x360       60.00
   512x384       60.00
   512x288       60.00
   480x270       60.00
   400x300       60.00    56.00
   320x240       60.00

*** Please note that this reply is generated by LLM automatically ***

Based on the provided context, it appears that you are experiencing issues with setting up a high-resolution remote desktop on your Jetson Orin NX module, which is installed on a Connect Tech Hadron carrier board. The carrier board does not have HDMI or DisplayPort connectors, and you are trying to use VNC to remotely access the GUI.

Firstly, it’s essential to note that the Jetson Orin NX module is a powerful AI computing device, and its display capabilities are primarily designed for HDMI and DisplayPort outputs. Since your carrier board does not have these connectors, you are relying on VNC for remote access.

The issue you are facing is that the maximum resolution you can achieve with VNC is 1280x720, and you want to increase it to 2560x1440. You have tried various methods, including modifying the xorg.conf file, but none of them have worked.

Comparing your situation to the context provided, there are a few potential issues that might be relevant:

  1. HDMI-related issues: Although your carrier board does not have HDMI connectors, some of the context threads mention HDMI-related problems on other Jetson modules (e.g., AGX Orin). It’s possible that the underlying issue is related to the display configuration or the module’s HDMI controller.
  2. Display configuration: The thread “How to configure HDMI, instead of DP, on customized carrier board” on the Jetson AGX Orin might be relevant, as it discusses display configuration. However, since your carrier board does not have HDMI or DisplayPort connectors, this might not be directly applicable.
  3. GPU power state: The thread “GPU is stuck to maximum power state at idle when using multiple monitors” mentions an issue with the GPU power state when using multiple monitors. Although this is not directly related to your issue, it might indicate that there are some underlying power management or GPU configuration issues that could be affecting your display setup.

To troubleshoot your issue, you could try the following:

  1. Verify the display configuration: Ensure that the display configuration on your Jetson Orin NX module is set correctly. You can try using the nvidia-settings command-line tool to configure the display settings.
  2. Check the VNC configuration: Verify that your VNC configuration is set to allow high-resolution displays. You can try increasing the geometry setting in your VNC configuration file to see if it makes a difference (see the sketch after this list).
  3. Test with a different carrier board: If possible, try using a different carrier board, such as the Orin AGX dev kit, to see if the issue is specific to the Hadron carrier board.
  4. Seek support from Connect Tech: As the manufacturer of the Hadron carrier board, Connect Tech might be able to provide specific guidance or support for configuring the display settings on their board.
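
As a rough sketch of item 2, assuming a standalone TigerVNC-style server (vncserver); if you are instead sharing the real :0 display (x11vnc, Vino, etc.), the VNC size is inherited from the X screen and has to be changed in the X configuration instead:

$ vncserver :1 -geometry 2560x1440 -depth 24    # standalone server renders its own virtual screen at the requested size
$ DISPLAY=:0 xrandr --fb 2560x1440              # shared :0 display: the size comes from the X screen itself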

In summary, while there are some potential issues related to display configuration and HDMI on other Jetson modules, the root cause of your problem is likely specific to your setup and the Hadron carrier board. Further troubleshooting and experimentation are needed to resolve the issue.

*** Please give the thumbs up if you get answers you like. Or provide feedback to help us improve the answer. ***

One restriction is that the modes must be from the EDID reported from the monitor. Custom modes are not supported. Does the monitor’s EDID show the requested resolution and scan rates? If so, then there could still be limitations on resolution and scan rate combinations, but anything (such as an adapter) which fails to pass EDID will cause a failure.
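
A quick way to see what the X server believes it is driving (assuming a server is running on :0; edid-decode is a separate package and only useful if a real EDID blob is present):

$ DISPLAY=:0 xrandr --props    # outputs, modes, and the EDID property if one exists
$ for e in /sys/class/drm/card*/edid; do [ -s "$e" ] && edid-decode "$e"; done    # decode any EDID the kernel has read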

Hi StrikeEagleIII,

I would suggest you also verify whether you can reproduce a similar issue on the devkit using the official BSP package.
Even though the Orin NX is different from the AGX Orin, it would still be worth a try.

Could you also share /etc/X11/xorg.conf and /var/log/Xorg.0.log for further check?

I’m not sure I follow; there’s no monitor on a “virtual” desktop I access over VNC.

Original:

# Copyright (c) 2011-2013 NVIDIA CORPORATION.  All Rights Reserved.

#
# This is the minimal configuration necessary to use the Tegra driver.
# Please refer to the xorg.conf man page for more configuration
# options provided by the X server, including display-related options
# provided by RandR 1.2 and higher.

# Disable extensions not useful on Tegra.
Section "Module"
    Disable     "dri"
    SubSection  "extmod"
        Option  "omit xfree86-dga"
    EndSubSection
EndSection

Section "Device"
    Identifier  "Tegra0"
    Driver      "nvidia"
# Allow X server to be started even if no display devices are connected.
    Option      "AllowEmptyInitialConfiguration" "true"
EndSection

I tried the default one you provided in the other post:

Section "Device"
    Identifier "DummyDevice"
    Driver "dummy"
    VideoRam 256000
EndSection

Section "Screen"
    Identifier "DummyScreen"
    Device "DummyDevice"
    Monitor "DummyMonitor"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1920x1080_60.0"
    EndSubSection
EndSection

Section "Monitor"
    Identifier "DummyMonitor"
    HorizSync 30-70
    VertRefresh 50-75
    ModeLine "1920x1080" 148.50 1920 2448 2492 2640 1080 1084 1089 1125 +Hsync +Vsync
EndSection

No difference. So I created a new modeline:

$ cvt 1920 1080
# 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync

and tried that:

Section "Device"
    Identifier "DummyDevice"
    Driver "dummy"
    VideoRam 256000
EndSection

Section "Screen"
    Identifier "DummyScreen"
    Device "DummyDevice"
    Monitor "DummyMonitor"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1920x1080_60.0"
    EndSubSection
EndSection

Section "Monitor"
    Identifier "DummyMonitor"
    HorizSync 30-70
    VertRefresh 50-75
#    ModeLine "1920x1080" 148.50 1920 2448 2492 2640 1080 1084 1089 1125 +Hsync +Vsync
    Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync
EndSection

Still no difference. The file /var/log/Xorg.0.log appears to be stale and doesn’t reflect the changes I made. As a debug step I renamed the file and rebooted, but a new version wasn’t created in its place. I’ve uploaded that version, but as I said, it doesn’t look like it reflects my changes.

Edit: I tried a different NX that was set up the same way and did get an Xorg.0.log file, which is now attached (renamed to reflect Hadron s/n 2833).

Xorg.0.log (17.4 KB)

Xorg.0.2833.log (26.4 KB)

So it turns out that when I originally set up the boards, I created a configuration /usr/share/X11/xorg.conf.d/xorg.conf that contained:

Section "Device"
    Identifier  "Configured Video Device"
    Driver      "dummy"
EndSection

Section "Monitor"
    Identifier  "Configured Monitor"
    HorizSync 31.5-48.5
    VertRefresh 50-70
EndSection

Section "Screen"
    Identifier  "Default Screen"
    Monitor     "Configured Monitor"
    Device      "Configured Video Device"
    DefaultDepth 24
    SubSection "Display"
    Depth 24
    Modes "1920x1080"
    EndSubSection
EndSection

which didn’t work (I’m guessing because it didn’t have the Modeline that defines the 1920x1080 mode). However, during troubleshooting I never removed that file, and I guess it was overriding anything I set in /etc/X11/xorg.conf, so whatever I had in that file was never used. When I got rid of that file, the xorg.conf from my original reply with the newer modeline works and gives me 90% of my solution. I was trying to set the resolution to 2560x1440, so I replaced the modeline in the file with:

Modeline "2560x1440_60.00"  312.25  2560 2752 3024 3488  1440 1443 1448 1493 -hsync +vsync

(which was generated by cvt). However, in Xorg.0.log I see these lines:

[    17.123] (**) DUMMY(0): VideoRAM: 256000 kByte
[    17.123] (--) DUMMY(0): Max Clock: 300000 kHz
[    17.123] (II) DUMMY(0): DummyMonitor: Using hsync range of 30.00-70.00 kHz
[    17.123] (II) DUMMY(0): DummyMonitor: Using vrefresh range of 50.00-75.00 Hz
[    17.123] (II) DUMMY(0): Clock range:  11.00 to 300.00 MHz
[    17.123] (II) DUMMY(0): Not using mode "2560x1440_60.00" (bad mode clock/interlace/doublescan)

which I surmise is because the modeline wants a clock rate of 312.25 MHz, but the dummy driver says it can only support a clock rate of 300 MHz. I tried adding this line

Option "MaxClock" "320000"

in the Monitor section, but it didn’t make a difference (it looks like that limit is “probed” from the driver?). Is there a way to change that max clock rate?
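
(For what it’s worth, a reduced-blanking mode from cvt -r should need a lower pixel clock and might fit under that 300 MHz limit, though I’d also have to widen HorizSync to cover its roughly 89 kHz hsync; something like:)

$ cvt -r 2560 1440
# 2560x1440 59.95 Hz (CVT 3.69M9-R) hsync: 88.79 kHz; pclk: 241.50 MHz
Modeline "2560x1440R"  241.50  2560 2608 2640 2720  1440 1443 1448 1481 +hsync -vsync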

Thanks!

It might be useful to turn on verbose logging of video mode for this case. Normally a Jetson GPU will demand any mode picked to be from the EDID, and even a virtual desktop would have a virtual EDID (or its equivalent). A verbose log of modes would tell us exactly why given modes are allowed or denied; and if the mode is not part of EDID, then it won’t be shown in the log as a valid mode.

In your Section "Device", add this line:
Option "ModeDebug"

Restart, and after the GUI starts, check the log (add a copy to the forum here). Note that the name of the log has a number in it which corresponds to the environment’s $DISPLAY. Most often the log will be “/var/log/Xorg.0.log”, but if you have a second monitor, then it might be “Xorg.1.log”. This environment variable is not guaranteed because it is not ordered; often a virtual desktop will export “DISPLAY” as “:10.0” and the log will be “/var/log/Xorg.10.log”. One of the best ways to find the log(s) in question is to use “ls -ltr /var/log/Xorg.*.log”; the newest file to be touched and written to will show up at the tail of the list of logs, and the time stamp, if you look right after X starts, will be from the time of start. If you also have a virtual desktop, then there might be two logs. Find out which logs are current after a reboot and startup and put in any log which is current.

Do note that this assumes the display is from the local Jetson and not from the remote host PC end; that seems obvious, but if you use something like “ssh -X” or “ssh -Y” to connect, then part of the server is on the Jetson and part is on the host PC and the logs won’t be of much use. A virtual desktop is in theory an actual full desktop on the Jetson itself, and there will be a log on the Jetson regardless of which host connects to the desktop. I just want to make sure that virtual and actual desktops do not get mixed up.

Then we look for the ModePool in the logs. There will be a note of every mode found in the EDID (even if it is a virtual EDID) and what the Jetson thinks of that mode, along with a reason why it is rejected if it is rejected. There will be a list of the final selection of valid modes in the ModePool, and the selected mode will be noted. There might also be a note in the log about the mode you’ve picked in the config.
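
As a rough example (adjust the log name to whichever one is current), something like this should pull the ModePool section and its surrounding context out of the log:

$ grep -n -i -B 2 -A 20 "modepool" /var/log/Xorg.0.log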

[    25.824] (==) No Layout section.  Using the first Screen section.
[    25.824] (==) No screen section available. Using defaults.

From the Xorg.0.log you shared, it cannot find a Layout or Screen section.

Please try the following configuration and check whether it helps in your case.

Section "ServerLayout"
    Identifier     "DefaultLayout"
    Screen 0       "Screen0" 0 0
EndSection

Section "Device"
    Identifier     "Tegra0"
    Driver         "nvidia"
    Option         "AllowEmptyInitialConfiguration" "true"
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    HorizSync       30.0 - 70.0
    VertRefresh     50.0 - 160.0
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Tegra0"
    Monitor        "Monitor0"
    DefaultDepth    24
    SubSection     "Display"
        Depth       24
        Virtual     2560 1440
    EndSubSection
EndSection
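
After placing this in /etc/X11/xorg.conf, restart X (the display manager service name below is an assumption; a reboot also works) and re-check whether the larger framebuffer is accepted:

$ sudo systemctl restart gdm3          # or simply reboot
$ DISPLAY=:0 xrandr                    # confirm the reported maximum screen size
$ DISPLAY=:0 xrandr --fb 2560x1440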

I think the original Xorg.0.log I posted was from a stale configuration; for some reason, the log file on that device didn’t appear to be updating. Can you take a look at Xorg.0.2833.log? It’s the Xorg.0.log file from a different device (I added 2833 to the name to disambiguate).

It’s saying the Max Clock of the dummy driver is 300000 kHz, and the modeline for 2560x1440 wants a pixel clock of 312.25 MHz, just above that limit. I tried adding an option to my xorg.conf to increase it (see my post above), but that didn’t seem to do anything.

Thanks

Are you able to get a “ModeDebug” log?

Attached. It appears, however, that the dummy driver doesn’t use that option?

[    19.179] (II) DUMMY(0): Using 33053 scanlines of offscreen memory 
[    19.179] (==) DUMMY(0): Backing store enabled
[    19.179] (==) DUMMY(0): Silken mouse enabled
[    19.180] (WW) DUMMY(0): Option "ModeDebug" is not used
[    19.180] (II) Initializing extension Generic Event Extension
[    19.180] (II) Initializing extension SHAPE
[    19.180] (II) Initializing extension MIT-SHM
[    19.180] (II) Initializing extension XInputExtension

xorg.conf

$ cat xorg.conf
Section "Device"
    Identifier "DummyDevice"
    Driver "dummy"
    VideoRam 256000
    Option "ModeDebug"
EndSection

Section "Screen"
    Identifier "DummyScreen"
    Device "DummyDevice"
    Monitor "DummyMonitor"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1920x1080_60.0"
    EndSubSection
EndSection

Section "Monitor"
    Identifier "DummyMonitor"
    HorizSync 30-70
    VertRefresh 50-75
    Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync
EndSection

Xorg.0.debug.log (31.1 KB)

An interesting detail. How are you connecting: is it something like “ssh -Y” or “ssh -X” from another Linux host? Is it a virtual desktop, and if so, which virtual desktop, and what video card are you using on the host PC? Which connection you used at the time of looking at the logs matters; answers could change dramatically depending on this. Some of what you see might be from the Jetson, and other behaviors which you do not expect could be from the host PC. I’m not sure how EDID would be used or how it would behave on a virtual desktop at the Jetson side.

For this I was just using the serial console of the Hadron. I’ve also ssh’d in from my Linux machine, but specifically didn’t do X forwarding. Either way, same result.
Thanks

I don’t think I know enough about VNC to answer, but this means there are two locations where resolution might change: (A) the virtual desktop the Jetson sees (the server), and (B) the client side on the host PC. I’m assuming that you have no problem with the client side itself, which has the actual monitor.

I can’t answer, but it now makes me very curious as to whether the VNC server running on the Jetson pretends to have an EDID. The server might no longer be using the Jetson’s GPU. Something which might provide some data is to run a heavy graphics load on the Jetson and watch the GPU usage with tegrastats. Does the Jetson’s GPU actually get used?

You could install package “mesa-utils” on the Jetson, which provides program “glxgears”. From the serial console watch tegrastats and then start glxgears and manipulate it to try to increase GPU load. Does tegrastats show the GPU is being used? It doesn’t have to be a huge load, although that might be better, but perhaps the GPU is not even taking part in the VNC server.

The log in “ls -ltr /var/log/Xorg.*.log | tail -n 1” on the Jetson itself will name the context, e.g., if the most recent timestamp is for a $DISPLAY of “:10.0”, then the log will be Xorg.10.log. In the serial console you might need to first “export DISPLAY=:10.0” before running anything from there. A program which you could run this way, and which comes with package mesa-utils, is “glxinfo”. If on the Jetson side’s serial console you’ve exported DISPLAY to the correct context, what do you see from “glxinfo | egrep -i '(nvidia|vendor)'”?
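
Putting the above together, a minimal sequence (the “:10.0” value is only an example; use whatever the newest log indicates) might look like this:

$ sudo apt-get install mesa-utils            # provides glxgears and glxinfo
$ ls -ltr /var/log/Xorg.*.log | tail -n 1    # the newest log names the active display
$ export DISPLAY=:10.0                       # match the number from the log found above
$ sudo tegrastats                            # watch GPU load in one terminal...
$ glxgears                                   # ...and generate GPU work in another
$ glxinfo | egrep -i '(nvidia|vendor)'       # which vendor/driver is actually rendering?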

If this is not NVIDIA’s GPU powering this virtual desktop, then we’re trying to answer the wrong question.

Apologies, I’ve been out of the office for two weeks. I will give glxgears a try, but I think this is all upstream of VNC; i.e., I’m 99% sure I would see the same behavior if I didn’t start the VNC server and instead tried to use X forwarding over ssh, but it’s entirely possible that I’m wrong. I don’t really know anything about the nitty-gritty details of EDID and how displays are set up.

When using a local X server, the EDID and GPU are those of the monitor plugged into the system. When you go to remote ssh forwarding, the EDID is from the monitor on the host system where you are actually sitting. So is the GPU.

A CUDA computation has some similarities. Some CUDA can go directly to a GPU, but many people do not realize that the X server is not really a “graphics” server; X11 is a GPU API, and often CUDA goes through the X API to do CUDA computations which are not for graphics. If you run a CUDA computation on the Jetson and view it from the Jetson, then you are guaranteed it is the Jetson’s GPU performing the CUDA computation. If you are on a remote Linux system with X being forwarded to the local desktop PC, then chances are the computation is actually being performed on the desktop PC. With a virtual desktop part of what goes on would be on the Jetson, and part on the local host PC. Most of what you think of as running on the Jetson would remain on the Jetson, and the desktop PC client becomes a separate GUI application which does not interfere with the Jetson GPU. That virtual desktop though could conceivably be using an actual monitor EDID from the Jetson if configured for it, but it might also be using an EDID which is purely virtual (not belonging to a real monitor, but pretending it is).

An interesting case for illustration is if you run OpenGL. The libraries and GPU are all on the Jetson using some specific version, and other versions (if too different) won’t mix well. Since it is all on one system, it should “just work”. When you run OpenGL via ssh forwarding, only some of what you see runs on the Jetson, and much of it runs instead on the desktop PC. The result is that if the desktop PC does not have the OpenGL libraries and software which works with the same release on the Jetson, then OpenGL apps will fail. CUDA might also find part of what it needs on the Jetson, and part of the libraries on the desktop PC; failure for release versions to match would result in a mysterious failure which does not occur when running purely on the Jetson. Virtual desktops get around much of this, but then it needs a client as well, and the client might have hardware acceleration.