3840x2160 display resolution on Jetson TX1?

Hello all,

I installed L4T 23.1 on my NVIDIA Shield Android TV, but

I can’t use the 3840x2160 display resolution (yes, I installed the user-space graphics components).

My question:

Has anyone successfully used the 3840x2160 resolution on their TX1 board?

Thanks in advance

Yep, 2160p60.
How did you install L4T on your Shield TV? I would like to also :)

Hello dusty_nv,

Thank you, but I can’t get 4K resolution with my setup. It was no problem for me with the Jetson TK1 and the same monitor.

But you asked how I installed L4T.

My steps are:

I read this article:

(1) http://forum.xda-developers.com/shield-tv/general/ubuntu-utopic-nvidia-shield-tv-t3150352

I downloaded the boot image and rootfs from the link in this article and installed them on my box.
You can read there how to install it. Thanks to the author.

That was my first step to get Linux running on my NVIDIA Shield TV: booting from recovery
with the rootfs on an SD card.

It was running, but with a few problems: no sound, no graphics acceleration, and no
nice desktop. Wi-Fi was no problem for me, because I use a wired connection.
It was no problem to use other rootfs images with this setup. My last success was the openSUSE
aarch64 build, but it looks ugly.

Then I was happy about the new L4T release for the TX1 board. I think you know the steps
for building the rootfs.

If not, you can find it e.g. here:

(2) http://developer.download.nvidia.com/embedded/L4T/r23_Release_v1.0/l4t_quick_start_guide.txt

I transferred this rootfs to an SD card. If you would like to know how, please ask.

With the kernel from the xda-developers link (1), everything runs nicely except 4K resolution
and sound. My last step was to use the official TX1 kernel described in (2).

You will find an explanation of how to build a boot image for the Shield console in the downloads linked from (1).

(3) http://goo.gl/sl6rMu

There you will find how to build the boot image (look inside (3) at nvidia_initrd.gz, specifically the Makefile).
I replaced the kernel Image inside with the Image you find in (2). It was a little bit tricky,
but if you are interested, I can tell you how to build the new boot.img with the official NVIDIA TX1 Image.
(The main task: in bootimg.cfg you have to change the bootsize to 0x179f000.)
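To illustrate the bootimg.cfg change, here is a minimal sketch. Only the bootsize value comes from this post; the other config fields and the abootimg repack command are assumptions based on the tooling from (1), not something I have verified on a Shield:

```shell
# Stand-in bootimg.cfg; in practice this file comes from unpacking the
# original boot.img (e.g. "abootimg -x boot.img"). The pagesize value
# below is a placeholder.
cat > bootimg.cfg <<'EOF'
pagesize = 0x200
bootsize = 0x1000000
EOF

# Enlarge bootsize so the larger official TX1 Image still fits:
sed -i 's/^bootsize.*/bootsize = 0x179f000/' bootimg.cfg
grep '^bootsize' bootimg.cfg    # prints: bootsize = 0x179f000

# Repacking would then look roughly like (assumed abootimg invocation):
#   abootimg --create boot-new.img -f bootimg.cfg -k Image -r initrd.img
```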

So I think I have a full L4T version on my Shield console.

This one runs nicely on my Shield console, but the highest resolution I can choose is 1920x1080.
So there must be something wrong, since you have no problem getting 4K resolution.

Maybe someone can help me get the 4K resolution, because this was my main goal. :-(



Well, L4T on Shield is technically unsupported (very interesting procedure though), so there is no guarantee of 4K, but maybe someone in the community can help. We have tested with L4T on Jetson TX1, however.

Hi there

We are having similar problems when connecting a UHD TV (Samsung UE40HU6900S) to the HDMI Out of the Jetson TX1.

At the first boot, the monitor was working (at a low resolution). But now, at every boot, the kernel halts after trying to set a very high pixel clock (e.g. 594 MHz). The boot process does not finish, and we cannot interact with the device at this point.
When the TV is not plugged in, the kernel boots without problems.

Error messages when kernel stops (with TV plugged in):

[    5.536821] tegradc tegradc.1: nominal-pclk:594000000 parent:594000000 div:1.0 pclk:594000000 588060000~647460000
[    5.920135] tegradc tegradc.1: probed
[    5.922938] tegradc tegradc.1: nominal-pclk:594177000 parent:594000000 div:1.0 pclk:594000000 588235230~647652930
[    6.014720] Console: switching to colour frame buffer device 480x135
[    6.122410] tegradc tegradc.1: fb registered

We had similar problems with the Jetson TK1 with L4T R21.3 in May (see https://devtalk.nvidia.com/default/topic/830170/tk1-kernel-stops-booting-if-uhd-monitor-is-conected-to-hdmi/)

What seems to be the problem is the implementation of the framebuffer function tegra_fb_update_monspecs(…) in drivers/video/tegra/fb.c.

In there, the different video modes supported by the monitor are passed to the frame buffer. Unfortunately, the implementation used in L4T R23.1 fails to choose a working video mode.
We were able to fix the problem by using the following implementation of the tegra_fb_update_monspecs(…) function in drivers/video/tegra/fb.c and recompiling the L4T R23.1 kernel.

void tegra_fb_update_monspecs(struct tegra_fb_info *fb_info,
			      struct fb_monspecs *specs,
			      bool (*mode_filter)(const struct tegra_dc *dc,
						  struct fb_videomode *mode))
{
	struct fb_event event;
	int i;
	int blank = FB_BLANK_UNBLANK;
	struct fb_videomode *bestMode;
	struct tegra_dc_mode dcmode;

	event.info = fb_info->info;
	event.data = &blank;

	/* Notify layers above fb.c that the hardware is unavailable */
	fb_info->info->state = FBINFO_STATE_SUSPENDED;

	if (specs == NULL) {
		memset(&fb_info->info->monspecs, 0x0,
		       sizeof(fb_info->info->monspecs));

		/* Reset video mode properties to prevent garbage being
		 * displayed on the 'mode' device. */
		fb_info->info->mode = (struct fb_videomode *) NULL;
		fb_add_videomode(&tegra_dc_vga_mode, &fb_info->info->modelist);
		fb_videomode_to_var(&fb_info->info->var, &tegra_dc_vga_mode);
		fb_notifier_call_chain(FB_EVENT_BLANK, &event);

		/* For L4T - after the next hotplug, the framebuffer console
		 * will use the old variable screeninfo by default; only the
		 * video-mode settings will be overwritten as per the monitor
		 * connected. */
		memset(&fb_info->info->var, 0x0, sizeof(fb_info->info->var));
		return;
	}

	memcpy(&fb_info->info->monspecs, specs,
	       sizeof(fb_info->info->monspecs));
	fb_info->info->mode = specs->modedb;

	/* Build the modelist from the monitor's modes, applying the
	 * optional filter. */
	for (i = 0; i < specs->modedb_len; i++) {
		if (mode_filter) {
			if (mode_filter(fb_info->win.dc, &specs->modedb[i]))
				fb_add_videomode(&specs->modedb[i],
						 &fb_info->info->modelist);
		} else {
			fb_add_videomode(&specs->modedb[i],
					 &fb_info->info->modelist);
		}
	}

	/* Restoring to state running. */
	fb_info->info->state = FBINFO_STATE_RUNNING;

	/* WORKAROUND: if the pixclock of specs->modedb[0] is not supported,
	 * the kernel stops. Therefore, instead of programming modedb[0]
	 * unconditionally, we look here for the best mode (same size,
	 * highest supported pixclock). */
	fb_videomode_to_var(&fb_info->info->var, &specs->modedb[0]);
	bestMode = (struct fb_videomode *) fb_find_best_mode(
				&fb_info->info->var, &fb_info->info->modelist);
	if (bestMode == NULL) {
		/* Error, no matching mode found. */
		fb_notifier_call_chain(FB_EVENT_NEW_MODELIST, &event);
		return;
	}

	/* fb_videomode.pixclock is in picoseconds; convert to Hz for the
	 * display controller. */
	dcmode.pclk          = bestMode->pixclock;
	dcmode.pclk          = PICOS2KHZ(dcmode.pclk);
	dcmode.pclk         *= 1000;
	printk(">>> DBG: pclk=%d\n", dcmode.pclk);
	dcmode.h_ref_to_sync = 1;
	dcmode.v_ref_to_sync = 1;
	dcmode.h_sync_width  = bestMode->hsync_len;
	dcmode.v_sync_width  = bestMode->vsync_len;
	dcmode.h_back_porch  = bestMode->left_margin;
	dcmode.v_back_porch  = bestMode->upper_margin;
	dcmode.h_active      = bestMode->xres;
	dcmode.v_active      = bestMode->yres;
	dcmode.h_front_porch = bestMode->right_margin;
	dcmode.v_front_porch = bestMode->lower_margin;
	tegra_dc_set_mode(fb_info->win.dc, &dcmode);

	fb_videomode_to_var(&fb_info->info->var, bestMode);
	fb_notifier_call_chain(FB_EVENT_MODE_CHANGE_ALL, &event);
	fb_notifier_call_chain(FB_EVENT_BLANK, &event);
	fb_notifier_call_chain(FB_EVENT_NEW_MODELIST, &event);
}
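As a sanity check on the pclk conversion in the patch (fb_videomode.pixclock is the pixel period in picoseconds, and PICOS2KHZ(p) is 10^9 / p), here is the same arithmetic for a 594 MHz 2160p pixel clock. The pixclock value 1684 ps is my own worked example, not taken from a real EDID:

```shell
# PICOS2KHZ(p) = 10^9 / p, then the patch multiplies by 1000 to get Hz.
pixclock_ps=1684                          # ~1/594 MHz, rounded to whole ps
pclk_khz=$((1000000000 / pixclock_ps))    # integer division, as in the kernel
pclk_hz=$((pclk_khz * 1000))
echo "$pclk_hz"                           # prints 593824000 (~594 MHz)
```

The small discrepancy from exactly 594000000 comes from rounding the period to whole picoseconds, which is why the driver's nominal-pclk log line shows a tolerance range rather than one exact value.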

The implementation is not perfect, though. We still need to connect via UART and set the video mode manually with xrandr, since the default (4096x2160 at 30.0 Hz) does not work, and then re-plug the HDMI cable to the monitor.

export DISPLAY=:0
xrandr --output HDMI-0 --mode 3840x2160 --rate 30.0

Hope this helps,
Regards Tobias

Maybe someone is interested (I know this is a forum for the TK1/TX1 developer boards and not the right place):

I can get a resolution of 3840x2160 at 30 Hz if I use an openSUSE rootfs.

You can find it here:


It’s really buggy, but it confirms that the Shield is able to use this high resolution.
So it looks like it is a problem with the installed Ubuntu.

Hi Kamm, have you tested your Samsung display with an x86 Linux system with an NVIDIA GPU? I’d be interested to know if the issue is ARM-specific.

I am trying to implement this patch on 3.10.67, and it is failing due to an incorrect symbol, I believe.

Whatever this symbol is, it is throwing the error…

event.data = &blank;

From wikipedia: https://en.wiktionary.org/wiki/␣ looks like a space or nothing should suffice…

I don’t know what that file patch is, but in the original file that line is “event.data = &blank;”.

EDIT: Spam filters fixed, “&blank;” can now be seen :)

Ahh! The forum spam filters are preventing this from being posted. The meaning is: set event.data equal to the address of (ampersand) “blank”. “blank” itself was declared further up as a local variable.

NOTE: You probably copied and pasted this from the forum, but the text in the original post was mangled by spam filters on the forum…this is why the symbol is missing instead of showing what it really should be. Here is a space-separated version to hopefully get past the spam filters:

e v e n t . d a t a = & b l a n k ;

Thank you very, very much, linuxdev!! The reason I need this implemented is that the pixel clock rate on tegradc.0 is getting set to the maximum 2160p rate, 594 MHz, and this in turn affects audio initialization and the initial LCD/HDTV setup on boot.

The log should read:

tegradc tegradc.0: nominal-pclk:148500000 parent:148500000
div:1.0 pclk:148500000 147015000~161865000

But with the clk getting set to the 1st available rate being 2160p the log reads:

tegradc tegradc.0: nominal-pclk:148500000 parent:594000000
div:4.0 pclk:148500000 147015000~161865000
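The two logs differ only in the parent clock. When the parent is stuck at the 594 MHz (2160p) rate, the clock framework has to insert a divider to reach the same 148.5 MHz pixel clock, which is where the div:4.0 comes from. A quick check of the arithmetic (values taken from the logs above):

```shell
# Parent correctly set to the mode's own rate: divider stays at 1.
echo $((148500000 / 148500000))   # prints 1
# Parent left at the 594 MHz 2160p rate: a divider of 4 is needed.
echo $((594000000 / 148500000))   # prints 4
```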

Okay, well, this still fails after editing fb.c and the fb.h header. I am really not too sure how to go about getting the correct clock setting at boot-up…

error: unused variable 'dcmode' [-Werror=unused-variable]
  struct tegra_dc_mode dcmode;
error: unused variable 'bestMode' [-Werror=unused-variable]
  struct fb_videomode *bestMode;
error: too many arguments to function 'tegra_fb_update_monspecs'
  tegra_fb_update_monspecs(hdmi->dc->fb, NULL, NULL, NULL)
error: too many arguments to function 'tegra_fb_update_monspecs'
  tegra_fb_update_monspecs(hdmi->dc->fb, &specs,

Maybe NVIDIA has implemented this already, I’m not too sure; I could be missing a CONFIG, e.g. CONFIG_ADF_TEGRA:

int tegra_adf_process_hotplug_connected(struct tegra_adf_info *adf_info,
		struct fb_monspecs *specs);

This may apply if ADF is involved:

I do not know if there are other dependencies or edits required.

Hello, Linux4all:
Can you describe the details of the problem you met?

  1. SDK version you are using.
  2. patch you’ve applied.
  3. full error log you’ve got.



I also had various 4K HDMI resolution problems, but it turned out to be a low-quality HDMI cable purchased on eBay. Replacing the cable fixed everything.