Jetson TX1 Desktop Sharing Resolution Problem Without Real Monitor

Hi, I can connect remotely to the TX1, but the resolution is too low. When I open any page, part of it is cut off. Is there a way to solve this?
Thank you…

Hi curiouser,
Could you share how to reproduce this? Which remote client/server do you use?
Do you mean that you don’t have a real monitor attached to the TX1, so you use a remote connection instead, right?

Hi WayneWWW,
I use “Desktop Sharing” in Ubuntu and connect with “vnc viewer” from my main computer. Yes, I do not have a real monitor, so I had to resort to this kind of solution.

After I connect the TX1 to a real monitor over HDMI, I get the resolution I want when I connect remotely in the same way. But without the HDMI monitor the result looks like this:
http://imgur.com/6XwLfUT

Thank you…

Hi curiouser,
I think this issue happens because the vnc server does not support xrandr. The current resolution is the default 640x480. You could check whether the latest vnc server supports xrandr. I have tested a few use cases:

  1. Connected to a HDMI monitor with 1920x1080 when boot up -> vnc client can see 1920x1080 resolution.
  2. Without connecting to any panel or monitor -> vnc client can only see 640x480 as your case.
  3. Based on 1, unplug the HDMI monitor -> vnc client remains at 1920x1080.
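To see which of these cases your board is in, you can query the running X server for its current mode. This is a sketch that assumes the desktop runs on DISPLAY=:0; on the board (over ssh) you would run `DISPLAY=:0 xrandr --query | grep current`. The sample line below stands in for the headless 640x480 case.

```shell
# Sketch: check the resolution the X server is actually running.
# On the TX1:  DISPLAY=:0 xrandr --query | grep current
# The sample line below is illustrative output for the headless case.
sample='Screen 0: minimum 8 x 8, current 640 x 480, maximum 16384 x 16384'
echo "$sample" | grep -o 'current [0-9]* x [0-9]*'
```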

Hi WayneWWW,
Yes, it’s just like you said. I tried it with a borrowed monitor, as I described above. But when I come home and boot without a monitor, the TX1 stays at 640x480.

hi curiouser,
Please check whether the latest vnc server supports xrandr and try installing it on tegra.

Hi WayneWWW,
I used Vino VNC (Desktop Sharing), which comes as the default in Ubuntu. How can I tell whether it supports xrandr?

You can take a look at the vnc support page.

https://support.realvnc.com/knowledgebase/article/View/393/5/how-can-i-change-the-geometry-of-a-vnc-server-in-virtual-mode-desktop

I’m not sure, but I do not think it supports xrandr, because there is no resolution setting in the vnc server. Am I right?
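One way to check the server side is to list the X extensions the running server advertises; RANDR must appear there for any resolution switching to be possible (whether vino actually uses it is a separate question). This is a sketch: on the board you would run `DISPLAY=:0 xdpyinfo | grep -i randr`; the heredoc stands in for a sample xdpyinfo extension list.

```shell
# Sketch: check whether the X server advertises the RANDR extension.
# On the board:  DISPLAY=:0 xdpyinfo | grep -i randr
# The heredoc below is sample xdpyinfo output, for illustration only.
grep -i 'randr' <<'EOF'
    MIT-SHM
    RANDR
    RENDER
EOF
```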

Hi curiouser,

You may have seen this post. Please take a look if not.
https://devtalk.nvidia.com/default/topic/984355/?comment=5094856

The standard VNC server vino of Ubuntu 14.04 does not support xrandr to change the screen resolution on a headless system. There are several ways to cope with that problem. One is to install another VNC server, such as vnc4server, which supports display resolution configuration through xrandr.

When no screens are found, the system defaults to a “virtual desktop” with a 640x480 resolution. For most users running one remote machine with one Tegra board, it may be sufficient to change only the default resolution of the headless system’s virtual monitor. To do that, simply add a “Screen” section to your /etc/X11/xorg.conf and choose a resolution (Virtual 1280 800):

sudo nano /etc/X11/xorg.conf

# Copyright (c) 2011-2015 NVIDIA CORPORATION.  All Rights Reserved.

#
# This is the minimal configuration necessary to use the Tegra driver.
# Please refer to the xorg.conf man page for more configuration
# options provided by the X server, including display-related options
# provided by RandR 1.2 and higher.

# Disable extensions not useful on Tegra.
Section "Module"
    Disable     "dri"
    SubSection  "extmod"
        Option  "omit xfree86-dga"
    EndSubSection
EndSection

Section "Device"
    Identifier  "Tegra0"
    Driver      "nvidia"
    Option      "AllowEmptyInitialConfiguration" "true"
EndSection

Section "Monitor"
   Identifier "DSI-0"
   Option    "Ignore"
EndSection

Section "Screen"
   Identifier    "Default Screen"
   Monitor        "Configured Monitor"
   Device        "Default Device"
   SubSection "Display"
       Depth    24
       Virtual 1280 800
   EndSubSection
EndSection

After a reboot the board will start the virtual screen with the configured resolution. Note that this way you are still not able to change to a resolution other than your chosen default.
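After the reboot you can verify that the virtual screen came up at the configured size. This is a sketch assuming the server runs on DISPLAY=:0; on the board you would run `DISPLAY=:0 xrandr --query | head -n 1`. The sample line below shows the expected headline after the change.

```shell
# Sketch: extract the "current WxH" field from xrandr's first output line
# to confirm the Virtual 1280 800 setting took effect.
# On the board:  DISPLAY=:0 xrandr --query | head -n 1
sample='Screen 0: minimum 8 x 8, current 1280 x 800, maximum 16384 x 16384'
echo "$sample" | sed -n 's/.*current \([0-9]* x [0-9]*\),.*/\1/p'
```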

EDIT: Tried to add the text insets of the xorg.conf to the post.


I recently switched from a carrier board having an HDMI display output to one having no display outputs of any kind (Auvidea J90).

I have been using the built-in remote server offered in Ubuntu 16.04 (Jetpack 3.1) with RealVNC Viewer as the client.

Performance was never good, but when I switched to the new carrier board, the UI slowed down by a factor of 10 or more. I suspected this had to do with X11’s usage of the GPU (and the current lack thereof), as compiz now fully loads 2 or more of the CPU cores.

I attempted the recommended edits to the xorg.conf, but received the following error upon reboot:

" Could not apply the stored configuration for monitors
none of the selected modes were compatible with the possible modes:
Trying modes for CRTC 394
CRTC 394: Trying mode 640x480@60Hz with output at 720x576@60Hz
(pass 0)
CRTC 394: Trying mode 640x480@60Hz with output at 720x576@60Hz
(pass 1)
"

It then detects a 40" display, and yields a final resolution of 1280x720, and continues to run very slowly, as though it were rendering on the CPU.

Is it impossible to use the GPU to drive the virtual display used for the remote desktop?

Do you get any errors from “sha1sum -c /etc/nv_tegra_release”? If so, then I would guess the error is on libGLX.so. This is not only the part which enables hardware acceleration, it also allows CUDA access.

In your virtual system I’m not sure how to be sure that the “glxinfo” command applies to the Jetson side, but try to run this command and see if it reports the NVIDIA driver or if it instead reports Mesa. The output will be long, but relevant information will be near the top. This will filter it some for you:

glxinfo | egrep -i '(opengl|mesa)'

The complication is that normally this would show the information for the display on the system being used. This has two displays involved though…one is the virtual display at the Jetson side, the other is the client running on your host…I do not know if glxinfo will show for Jetson or PC…I believe it will show for the Jetson though.

To check with cpu/gpu utility, please check tegrastats

sudo ./tegrastats

linuxdev,

“sha1sum -c /etc/nv_tegra_release”:
Returns no errors.

“glxinfo | egrep -i ‘(opengl|mesa)’”:
Returns: “Error: unable to open display”
[Note: I installed mesa-utils to execute this]

“xdpyinfo”:
Returns: “Error: unable to open display”

“sudo ./tegrastats”:
Sample output:
RAM 867/7854MB (lfb 1552x4MB) cpu [0%@349,off,off,37%@347,1%@348,1%@347] EMC 0%@1600 APE 150 VDE 1203 GR3D 0%@140
RAM 867/7854MB (lfb 1552x4MB) cpu [1%@346,off,off,1%@347,2%@348,1%@348] EMC 0%@1600 APE 150 VDE 1203 GR3D 0%@140
RAM 867/7854MB (lfb 1552x4MB) cpu [0%@345,off,off,0%@348,1%@346,0%@347] EMC 0%@1600 APE 150 VDE 1203 GR3D 0%@140
RAM 867/7854MB (lfb 1552x4MB) cpu [0%@345,off,off,1%@348,2%@348,0%@347] EMC 0%@1600 APE 150 VDE 1203 GR3D 0%@140
RAM 867/7854MB (lfb 1552x4MB) cpu [0%@349,off,off,0%@347,2%@347,0%@348] EMC 0%@1600 APE 150 VDE 1203 GR3D 0%@140
RAM 867/7854MB (lfb 1552x4MB) cpu [8%@345,off,off,1%@347,0%@347,1%@348] EMC 0%@1600 APE 150 VDE 1203 GR3D 0%@140
RAM 867/7854MB (lfb 1552x4MB) cpu [1%@352,off,off,2%@348,2%@348,0%@349] EMC 0%@1600 APE 150 VDE 1203 GR3D 0%@140
RAM 867/7854MB (lfb 1552x4MB) cpu [2%@345,off,off,0%@346,0%@348,0%@348] EMC 0%@1600 APE 150 VDE 1203 GR3D 0%@140
RAM 867/7854MB (lfb 1552x4MB) cpu [2%@345,off,off,7%@348,0%@348,1%@347] EMC 0%@1600 APE 150 VDE 1203 GR3D 0%@140
RAM 867/7854MB (lfb 1552x4MB) cpu [1%@349,off,off,2%@347,0%@348,0%@347] EMC 0%@1600 APE 150 VDE 1203 GR3D 0%@140
RAM 867/7854MB (lfb 1552x4MB) cpu [0%@350,off,off,1%@348,2%@348,0%@348] EMC 0%@1600 APE 150 VDE 1203 GR3D 0%@140
RAM 867/7854MB (lfb 1552x4MB) cpu [5%@347,off,off,2%@348,0%@348,0%@347] EMC 0%@1600 APE 150 VDE 1203 GR3D 0%@140
RAM 867/7854MB (lfb 1552x4MB) cpu [1%@348,off,off,2%@348,0%@348,0%@347] EMC 0%@1600 APE 150 VDE 1203 GR3D 0%@140
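The GR3D field in those lines is the GPU load, and it sits at 0% throughout, which supports the CPU-rendering suspicion. A quick sketch for pulling that field out of a tegrastats line (the sample line below is copied from the output above; on the board you would pipe live tegrastats output through the same filter):

```shell
# Sketch: extract the GR3D (GPU) load from a tegrastats line. A constant 0%
# while the desktop feels sluggish suggests rendering is happening on the CPU.
line='RAM 867/7854MB (lfb 1552x4MB) cpu [0%@349,off,off,37%@347,1%@348,1%@347] EMC 0%@1600 APE 150 VDE 1203 GR3D 0%@140'
echo "$line" | grep -o 'GR3D [0-9]*%'
```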

Your environment was not linked to a display…virtual or real. Without this there is no possibility of anything driving the GPU (video or CUDA) working. This is the reason glxinfo and xdpyinfo failed. The same will be true for everything else as well.

What shows up from this?

ls /tmp/.X11-unix/*

If X0 shows up, then there is a display which can be bound by “export DISPLAY=:0”. If X1 shows up, then there is a display which can be bound by “export DISPLAY=:1”, and so on. A DISPLAY is really a buffer, and not necessarily a physically connected monitor. If no display is visible, then there is no virtual desktop server running.
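The socket-name-to-DISPLAY mapping described above can be sketched as follows. The hard-coded X0/X1 list is a stand-in for illustration; on the board you would substitute the real output of `ls /tmp/.X11-unix`.

```shell
# Sketch: derive the DISPLAY value from each socket under /tmp/.X11-unix.
# On the board, replace the hard-coded list with:  ls /tmp/.X11-unix
for sock in X0 X1; do
    echo "socket $sock -> export DISPLAY=:${sock#X}"
done
```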

Linuxdev,

The command returned X0 and X1 (I assume X1 is the virtual display from the remote desktop?).

Ultimately, I resolved the bulk of my issue by installing the firmware provided by Auvidea for the J90 carrier board (quite the experience for a headless setup). I’ll take a look at the xorg.conf to see if there are any significant differences, but all I know is that a noticeable speedup occurred after applying the patches. The remote performance is now equivalent to what I experienced prior to the carrier board change.

That said, I’m still hunting for a faster desktop-streaming solution, that works out-of-the-box with a JetPack install.

Likely “DISPLAY=:0” is the physical display and “DISPLAY=:1” is the virtual display (set the environment variable like that prior to launching an X-aware application and it should refer to that particular display…glxinfo is one example).

My environment is a Jetson Nano + the official Ubuntu 18.04. I use the Jetson Nano as a headless box and log in remotely through its internal vino vnc server. I changed the virtual screen size following einrob’s tricks. The tricks work, but when I connect a real HDMI monitor and power cycle the board, I hit the same issue that evan.censystech did:

" Could not apply the stored configuration for monitors
none of the selected modes were compatible with the possible modes:
Trying modes for CRTC 394
CRTC 394: Trying mode 640x480@60Hz with output at 720x576@60Hz
(pass 0)
CRTC 394: Trying mode 640x480@60Hz with output at 720x576@60Hz
(pass 1)
"

Does that mean the trick of modifying the default virtual screen size in the xorg.conf file only works without a real screen connected?

Someone else may know, but the mechanism is that EDID data is provided by a monitor to name its resolution and timings. A virtual server probably has to provide some “synthetic” substitute for that. The GPU itself only works with certain predefined resolutions and timings, and if an EDID is missing or not accepted, then timing falls back to some other default. If that default works with your monitor (whether physical or virtual), then you get a display (although it might not be the resolution you expected or wanted). Since I don’t know the mechanism by which resolution and timing of a virtual desktop are configured I don’t know the answer.

FYI, if a hotplug event is detected and the EDID of a monitor is successful, then unplugging this and plugging in another monitor without EDID will likely result in the resolution sticking to what that previous monitor was set up for. Even a virtual display might inherit a successful EDID query of a real monitor, and if the real monitor was never there, then perhaps initial default timing is failing.