HDMI 4kp60 output without X11


I want to reduce the boot time of the TX2, so I tried disabling the X11 GUI. I then tried to use GStreamer with nvhdmioverlaysink or other nv* sinks, but they do not start.

  1. Can anyone tell me if there is a way to use HDMI 4kp60 output without X11?

  2. Does anyone know how NVIDIA reduces the boot time on the PX2 car platform?

  • On the PX2 I heard they have a boot time of 3 seconds because of the requirements for car systems.

Thanks a lot.

You probably can’t get away without using X, since the driver is bound to its ABI. However, most people who see the graphics mistakenly assume that what the window manager provides is actually X11…it may be that what you really want is to keep X11 but remove the window manager, and perhaps the display manager.

Much of the software you are using requires a DISPLAY environment variable, which means the X11 protocol is mandatory for that software even if you never want to display anything.

Display managers provide the login services; window managers provide everything you see on the desktop, e.g., buttons, menus, and background colors. When X runs without those it just shows an ugly cursor and a gray background…a single application could be started this way, but not multiple applications. The usual situation is that X runs the display manager, and when the display manager accepts authentication, X restarts with that user’s permissions…X then runs the window manager, and the window manager runs the applications. Cut out the window manager and X runs only your application.

Are you trying to display just your application in a dedicated terminal?

Separately, when you boot, where does it spend its time before it even gets to start X?

Complex embedded systems will sometimes pre-boot – get to a point where all the memory is set up the way it will be during any boot operation – and dump to storage; the next time they boot, they simply re-load RAM from disk and run an abbreviated hardware initialization routine. This is similar to suspend/resume in laptop PCs.

I really like the professional tone how topics were discussed here in the forum. I thank you all for this.

@linuxdev: Yes, I can live with X11, and I understand now that we need it. Currently I run a web service with a remote interface where I can control most of the video components from the web interface. I will learn more about modifying the window manager so that a normal user can’t destroy my setup. Thanks for your feedback.

@Snarky: When we re-load from RAM, does that mean we have to power the device all the time?
Does this mean that all car systems running the NVIDIA PX2 require some amount of battery power 24/7?
I really would like to know how much power they need when they are in “sleep” mode.

I measured the time and my setup boots in 20-30 seconds; I want to reduce it to under 10 seconds. I will play with it more and report my results.

I don’t know if TX2 supports hibernation.
If it does, then no, you don’t re-load FROM RAM; you re-load a RAM dump from disk.
Because it’s persistent on disk, it uses zero power.

To try it out, you could make sure you have a swap file that’s at least as large as RAM, and then try with “sudo pm-hibernate” on the command line.
My guess is NVIDIA has not done the work necessary in the custom drivers to support proper hibernation, but I could be wrong!
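A sketch of that experiment (the swap file path and extra headroom are examples, and pm-hibernate comes from the pm-utils package, which may need to be installed):

```shell
# Create a swap file at least as large as RAM, enable it, then attempt
# hibernation. This should fail cleanly if the kernel/drivers lack support.
RAM_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
sudo fallocate -l "$((RAM_KB + 1048576))K" /swapfile   # RAM + ~1 GiB headroom
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
sudo pm-hibernate
```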

Hi garybb,

Actually, you can try nvoverlaysink, which does not rely on the window manager.

# Simple use case
gst-launch-1.0 nvcamerasrc ! nvoverlaysink overlay-x=400 overlay-y=600 overlay-w=640 overlay-h=480 overlay=1

Will try that, Wayne, and report.

@snarky: I ordered an SSD drive and will compare a normal boot with restoring a RAM image stored on the SSD. I didn’t find any info about TX1/TX2 hibernation. I will also study this to learn more about it.


You might also look at the dmesg output and notice the time stamp on each log line. This can be a good way to find things you don’t need that are taking time, or at least to determine where boot time is going.
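For example (the timestamps are seconds since kernel start; systemd-analyze is only available on systemd-based L4T releases):

```shell
# Look for large gaps between consecutive timestamps to spot slow steps.
dmesg | tail -n 40

# On systemd-based images, get a per-service breakdown of boot time.
systemd-analyze blame | head -n 10
```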

So, there’s no way to sink to HDMI directly on the Jetson and bypass X11? I was hoping not to run X11 to save memory; let me know if it can be done.


Keep in mind that X11 is what talks to the GPU driver. Many people mistake the memory used for a desktop login as somehow differing from what the GPU uses for CUDA. If you break the X11 API you break CUDA, because the video driver has nothing to talk to. Even though the buffer is named via the DISPLAY environment variable, the driver doesn’t care whether a monitor is attached to it or not.

You can remove all end user login and desktop applications and still run X for no purpose other than allowing CUDA (this would be much faster boot than also running a bunch of large programs). Almost nothing you see from a login is actually from X…it is the display manager or window manager application you are looking at. X by itself has almost no functionality…it is a buffer with organized (X11 protocol) access and nothing more. Maybe one day the newer Vulkan will have a server which can run without X…but it’d still be the same story…the window manager would be what the end user sees…Vulkan itself would be just a buffer attached to the GPU (though a more efficient buffer than X11 when rendering locally).

A virtual desktop with no window manager or login manager software is what you want…it could be started directly from the command line with some variation of the “startx” command, and then you could remove lightdm and unity…viewing this with a monitor would be nothing but an ugly gray background and a large ugly mouse cursor (or no mouse cursor). Everything else is a product of the window manager and is not needed for pure CUDA. You would have a DISPLAY environment variable associated with the buffer…CUDA would be satisfied.

As an example of part of this being removed, just log in to a console (e.g., CTRL-ALT-F2) and run (as user ubuntu) “startx”. The defaults will give you a minimal desktop without running much of the software. From another login you can run “killall -9 Xorg” to end that session (you own that session, so you can kill it; you’ll get an “operation not permitted” for the system’s own X login). It’ll drop back to a console.

Note that “/usr/bin/startx” is a human readable script. The man page for xinit will give you information about what startx uses under the covers.

Next, in a console, run “export DISPLAY=:1”. Then, from a remote ssh login as the same user, run the following (assuming “ubuntu”, but “nvidia” would also work…the login must be root or in group “video”, and must consistently be the same user for every step):

export DISPLAY=:1
X -nolisten tcp :1

The command issued in the ssh prompt should cause the console where you had exported :1 to basically go blank (but in graphical mode…the actual console F1/F2/F3/F4, so on, depends on how many servers are currently running unless your command specifies). Now, in another ssh login:

export DISPLAY=:1
xterm &

You’ll see the xterm show up on that display. No window manager, no login manager, no fancy mouse cursor. It’s purely X, and if you were to run a CUDA program with DISPLAY set to :1, it would use this video buffer. None of the login or other programs people associate with X would be running.

CUDA uses an associated DISPLAY the same as xterm would…only the results are computations which don’t show up in the user’s desktop. Setting “DISPLAY=:1” and starting X, followed by executing a program, is nothing special. It could just as well be xterm. The two programs have different results, but exist due to the driver talking the X protocol. It’s the window manager and display manager which are complicated and CPU/memory hungry.
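Condensed, the bare-X recipe described above looks like this (the display number and client are examples; the user must be root or in group “video”):

```shell
# Start a bare X server on display :1, with no display manager or
# window manager; run this from a text console or ssh session.
X -nolisten tcp :1 &

# In any other shell owned by the same user, point clients at that server.
export DISPLAY=:1
xterm &   # example client; a CUDA program would attach the same way
```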

I think you misunderstood my question. I’m not concerned about boot time, but I would rather prefer if gstreamer can sink to HDMI output of Jetson without using X11 altogether. Are you sure that CUDA needs X11?

root@tx2:/# ldd /usr/lib/aarch64-linux-gnu/tegra/libcuda.so
	linux-vdso.so.1 =>  (0x0000007f8ab76000)
	libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000007f8a0f0000)
	libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000007f89fa9000)
	libdl.so.2 => /lib/aarch64-linux-gnu/libdl.so.2 (0x0000007f89f96000)
	librt.so.1 => /lib/aarch64-linux-gnu/librt.so.1 (0x0000007f89f7e000)
	libpthread.so.0 => /lib/aarch64-linux-gnu/libpthread.so.0 (0x0000007f89f52000)
	libnvrm_gpu.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm_gpu.so (0x0000007f89f1d000)
	libnvrm.so => /usr/lib/aarch64-linux-gnu/tegra/libnvrm.so (0x0000007f89ee4000)
	libnvidia-fatbinaryloader.so.28.1.0 => /usr/lib/aarch64-linux-gnu/tegra/libnvidia-fatbinaryloader.so.28.1.0 (0x0000007f89e7f000)
	/lib/ld-linux-aarch64.so.1 (0x000000557f7c4000)
	libnvos.so => /usr/lib/aarch64-linux-gnu/tegra/libnvos.so (0x0000007f89e61000)


As an experiment, try running a CUDA application without lightdm. I’m not where I can check the exact command right now, but for example log in via ssh with no X forwarding (don’t use “ssh -Y” or “ssh -X”). Try to run a CUDA program…be sure to start with “unset DISPLAY” so you know nothing has set up an X11 context. If you can get it to work, then you can do it without X…though I have strong doubts that anything which works like that is actually using the GPU. tegrastats could verify one way or the other.
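A concrete version of this experiment (the deviceQuery sample is just one example of a CUDA binary; its location depends on where you built the CUDA samples):

```shell
# Log in over plain ssh (no -X/-Y) and make sure no X11 context leaks in.
unset DISPLAY

# Try any CUDA binary, e.g. the deviceQuery sample if you have built it.
./deviceQuery

# In a second terminal, watch GPU load to verify the GPU is really in use.
sudo tegrastats
```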

A few solutions for rendering without X11:

  1. Use nvoverlaysink in a gst pipeline. Note that nvoverlaysink does not require “export DISPLAY=:0”.

  2. Use the drm-nvdc library to render. Reference: MMAPI sample 08 and the link below.
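As a sketch of option 1 (the filename, decoder element, and overlay geometry are examples for an L4T 28.x TX2; adjust to your media and display):

```shell
# Decode an H.264 file and render straight to the display controller via
# nvoverlaysink, with no X server running.
gst-launch-1.0 filesrc location=sample.mp4 ! qtdemux ! h264parse ! \
    omxh264dec ! nvoverlaysink overlay-w=3840 overlay-h=2160 overlay=1
```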

Thanks, Wayne. I’ve just tried it and nvoverlaysink works fine without X11 indeed. It’s good to know, and that’s what we are planning to use.