I’m using the TX1 dev kit as a processing unit for a camera system. The output device (connected via HDMI) is a hardware recorder unit, and it is very picky about what refresh rates it will accept. It will only work with exact standards, so for the “60fps” I need, that means exactly 59.94Hz or 60.00Hz.
Running xrandr shows that the refresh rates available for 3840x2160 are 60.02Hz and 59.98Hz. Neither of these is accepted by the recorder (it says “No Input”). The 30.00Hz option does work at that resolution, and the 60.00Hz option for 1920x1080 also works fine.
I’ve run cvt and it does look like it would be possible to create a custom mode for X11 that comes out to exactly 60.00Hz:
“3840x2160_60.00” 713.00 3840 4160 4576 5312 2160 2163 2168 2237 -HSync +VSync
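(As a quick sanity check, that modeline really does come out to 60.00Hz: 713.00 MHz ÷ (5312 × 2237), using the horizontal and vertical totals from the modeline, ≈ 60.00 Hz.)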
I can add that mode with “xrandr --newmode …” but then I get an error when trying to assign that new mode to the display:
xrandr --addmode HDMI-0 "3840x2160_60.00"
X Error of failed request: BadMatch (invalid parameter attributes)
Major opcode of failed request: 140 (RANDR)
Minor opcode of failed request: 18 (RRAddOutputMode)
Serial number of failed request: 20
I have also tried setting up the mode in xorg.conf by adding it as a Modeline in the Monitor section, but that doesn’t appear to do anything. (I’ve tried many different configurations over the last few days, including ignoring the EDID, ignoring EDID frequencies, a custom EDID, ExactModeTimingsDVI, direct monitor assignment, specifying HorizSync/VertRefresh, and many combinations of those.)
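In case it’s relevant, the Monitor section I’m referring to looks something like this (the identifier is just whatever the monitor section is named in your config; the modeline is the cvt one from above):

Section "Monitor"
    Identifier "HDMI-0"
    # cvt-generated 3840x2160 @ 60.00Hz modeline from above
    Modeline "3840x2160_60.00" 713.00 3840 4160 4576 5312 2160 2163 2168 2237 -HSync +VSync
EndSection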
Can anyone help me add a custom mode to X11 so I can get the 4K UHD output actually running at exactly 60.00Hz?
I forgot to add that detail (been at this since Tuesday, so I’ve tried a lot of stuff). I tried both the cvt modeline (in my first post) and the gtf one that matches the calculator you used. I also tried both for 59.94Hz. All four have been tried in xorg.conf, and manually with xrandr. Here they all are:
The xorg log shows that my xorg.conf is loading, and doesn’t have any fatal errors. It appears to select all the sections and options I specify, except when it comes to assigning the modes, where it does this:
[ 24.163] (WW) NVIDIA(0): No valid modes for "DFP-0:3840x2160_60.00cvt"; removing.
[ 24.164] (WW) NVIDIA(0): No valid modes for "DFP-0:3840x2160_59.94cvt"; removing.
[ 24.164] (WW) NVIDIA(0): No valid modes for "DFP-0:3840x2160_60.00gtf"; removing.
[ 24.164] (WW) NVIDIA(0): No valid modes for "DFP-0:3840x2160_59.94gtf"; removing.
[ 24.164] (WW) NVIDIA(0):
[ 24.164] (WW) NVIDIA(0): Unable to validate any modes; falling back to the default mode
[ 24.164] (WW) NVIDIA(0): "nvidia-auto-select".
[ 24.164] (WW) NVIDIA(0):
[ 24.164] (II) NVIDIA(0): Validated MetaModes:
[ 24.164] (II) NVIDIA(0): "DFP-0:nvidia-auto-select"
I think the EDID works with the modes you mentioned; you might double check here (just paste in the EDID data): http://www.edidreader.com
The modes you reported as working are native; there may be a video or Xorg bug preventing the extension modes from the EDID (non-native modes) from working. Either that is the case, or the driver itself does not allow those modes (e.g., they are beyond the capability of the driver).
I had thought perhaps you could ignore the EDID, but your xorg.conf already does that. Does anything change when you comment those options out? For reference, the rejection looks like this:
(WW) NVIDIA(GPU-0): Mode is rejected: Only modes from the NVIDIA X driver's
(WW) NVIDIA(GPU-0): predefined list and modes from the EDID are allowed
(WW) NVIDIA(GPU-0): Mode "3840x2160_60" is invalid.
I think we are on to something that has affected many users across all of the Jetson platforms. Can someone from NVIDIA find out what modes the graphics driver does predefine? Also, is there some hardware limitation which prevents anything but predefined modes from being used? Knowing this could lead to a breakthrough on why many valid EDIDs are not working.
Btw, when all modes are rejected the proper thing to do would be to fall back to 640x480@60Hz. Other modes should not be used for fallback unless there is a reason. Ever since the days of old VGA, 640x480@60Hz has been defined as one of the standard modes, and when changing drivers (e.g., an upgrade) this was considered the “safe” mode to manually switch to beforehand. I don’t know if there is actually some standard for this, but it has been at minimum the de facto mode expected to work in case all automatic configuration fails. I will suggest that the video driver in L4T (all of them…TK1 through TX2) make 640x480@60Hz the fallback…this would leave a usable display for debugging in cases where EDID does not succeed.
I only included the custom ones I added with modelines. There are a lot more in the debug output that come from the driver and the EDID and end up passing validation. If there really were nothing to use, then 640x480 would be a safe fallback, but in this case it is finding other modes and using the preferred one from the EDID.
It’s also been a pain trying to find documentation on what options will work in xorg.conf. Most of what I find is for the XFree86 version, and most of the options listed, specifically the ModeValidation ones that appear to do exactly what I need, don’t actually work in the NVIDIA X version (the X log notes that they are not valid and are being ignored).
Ok… bumping this again looking for some more help.
The debug says that only modes from the EDID or the NVIDIA X driver are allowed. I attempted to use a custom EDID file (started with a copy of the one from my device, then altered it to include the corrected refresh rate), but X seems to be ignoring the CustomEDID option.
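For reference, the option syntax I mean is the NVIDIA driver’s CustomEDID, along these lines (the identifier and file path here are just placeholders, not my exact config):

Section "Device"
    Identifier "Tegra0"
    Driver     "nvidia"
    # "DFP-0" is the HDMI output as named in the X log; point the path at the edited EDID binary
    Option     "CustomEDID" "DFP-0:/etc/X11/edid-4k60.bin"
EndSection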
My next idea would be to alter what’s baked into the NVIDIA X driver. Linux_for_Tegra_64_tx1/sources/kernel_source/drivers/video/tegra/dc/modedb.c appears to have video modes defined. Does anyone know if that’s what will get used by X11?
Before the 4.x kernel series the mode setting was through X and the video driver. This implied that console modes and GUI modes had separate mode setting software. In the 4.x kernel series the separate mode setting in X was removed, and all mode setting was put into the kernel. The kernel still must talk to the NVIDIA video driver…what changes is the path of software leading up to asking the video driver to load a different mode. Restrictions on what is asked of the NVIDIA video driver could be modified in the kernel, but what the NVIDIA video driver allows could not be changed there.
There is a need for NVIDIA to publish which modes are available in the TX2 (and TK1 and TX1) video drivers. If there is a hardware restriction preventing the extension modes listed in this thread (which had valid EDID) from being used, it would be good to know; if it is a software restriction, then it would be good to see whether it can be adjusted to accept valid EDID modes which are currently being rejected.
Also forgot to add… before anyone notes that the modes I was trying before have too high a pixel clock for the TX1, here’s the updated modeline that fits within the limitations:
Turns out the modes available can be edited in the kernel source, then compiled.
[JetPack Install Dir]/64_TX1/Linux_for_Tegra_64_tx1/sources/kernel_source/drivers/video/modedb.c contains the definitions of the modes used by the framebuffer and X11 in const struct fb_videomode cea_modes. In my specific example:
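The entry in question is essentially the standard CEA-861 2160p60 timing. Roughly this (field layout follows struct fb_videomode; the 1683 pixclock is the important part, while the porch/sync values below are the standard CEA ones and may not match the source character for character):

{
	/* 3840x2160 @ 60Hz (CEA-861 2160p60 timing) */
	.refresh      = 60,
	.xres         = 3840,
	.yres         = 2160,
	.pixclock     = 1683,   /* pixel clock period in picoseconds (~594.18 MHz) */
	.left_margin  = 296,    /* horizontal back porch  */
	.right_margin = 176,    /* horizontal front porch */
	.upper_margin = 72,     /* vertical back porch    */
	.lower_margin = 8,      /* vertical front porch   */
	.hsync_len    = 88,
	.vsync_len    = 10,
	.sync         = FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT,
	.vmode        = FB_VMODE_NONINTERLACED,
},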
This defines the 4K UHD @ 60fps mode. While modeline calculators and EDID editors express the pixel clock in MHz, modedb.c stores it as the pixel clock period in picoseconds. As seen above, the pixclock is set to 1683, and 1000000/pixclock gives the value in MHz for a modeline (e.g. 1683 → 1000000/1683 ≈ 594.18 MHz).
The main problem is that this way of defining the pixel clock has a different granularity than modelines and EDID definitions. While 1683 is the closest match for the desired clock, it introduces a rounding error that leads to the 60.02Hz refresh rate (which is then rejected by devices with strict tolerances).
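To make the rounding problem concrete, here’s a small throwaway calculation (the 4400x2250 total raster size is my assumption, taken from the standard CEA-861 2160p60 timing, not something printed by the Jetson):

#include <stdio.h>

int main(void)
{
	/* CEA-861 2160p60 total raster size: 4400 x 2250 (active + blanking) */
	const double htotal = 4400.0, vtotal = 2250.0;

	/* Exactly 60.00Hz needs 4400 * 2250 * 60 = 594.000 MHz, i.e. a pixel
	 * period of 1000000 / 594.0 = 1683.5 ps, which the integer pixclock
	 * field in modedb.c cannot represent. Try the two nearest values: */
	for (int pixclock = 1683; pixclock <= 1684; pixclock++) {
		double mhz = 1000000.0 / pixclock;
		double hz  = (mhz * 1e6) / (htotal * vtotal);
		printf("pixclock %d ps -> %.2f MHz -> %.2f Hz\n", pixclock, mhz, hz);
	}
	/* Prints:
	 *   pixclock 1683 ps -> 594.18 MHz -> 60.02 Hz
	 *   pixclock 1684 ps -> 593.82 MHz -> 59.98 Hz
	 * which is exactly the 60.02/59.98 pair xrandr was reporting. */
	return 0;
}

So with the standard blanking there is no integer pixclock that lands on exactly 60.00Hz, which is why step 1 below is about building the mode from the pixclock format in the first place (presumably by adjusting the blanking so that an integer value divides out evenly).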
So, the solution is:
Use a tool that creates modes based on the modedb.c pixclock format, rather than a "close enough" conversion from a MHz-based format.
Alter [JetPack Install Dir]/64_TX1/Linux_for_Tegra_64_tx1/sources/kernel_source/drivers/video/modedb.c -> const struct fb_videomode cea_modes to use the new mode.
Compile the kernel.
Either move the required files (zImage, Image, device tree binaries, modules) to the TX1 and restart, or copy them to the JetPack locations and flash the TX1.
Configure X11 (/etc/X11/xorg.conf) to use the altered mode.
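For that last step, a minimal sketch of the relevant Screen section (everything here is a placeholder: the identifiers, and especially the mode name, which should be whatever the driver ends up calling the new mode):

Section "Screen"
    Identifier   "Screen0"
    Device       "Tegra0"
    Monitor      "HDMI-0"
    DefaultDepth 24
    # placeholder mode name; "DFP-0" is the HDMI output as seen in the X log
    Option       "MetaModes" "DFP-0: 3840x2160_60 +0+0"
    SubSection "Display"
        Depth 24
    EndSubSection
EndSection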
Perhaps a patch is needed so that the MHz-based path finds the closest pixclock instead of relying on an exact match (maybe a case of floating point values being close but not exact, causing the mode to be rejected)…many video issues would simply go away.
There are several things that I don’t understand/can’t get to work:
What tool can be used to make the conversion to put a mode into cea_modes? I have no idea how to calculate the left margin values, etc.
In the R28.1 modedb.c there are four hdmi_ext_modes which resemble what xrandr outputs when plugged into a UHD monitor (23.98Hz, 24Hz, 25Hz, and 30Hz modes only for 3840x2160). The pixel clock seems way too low at 3367 (= 297MHz, insufficient for 2160p60). Nothing seems to reference cea_modes (one of which is described as “3840x2160 60Hz”). That mode has a 1683 pixclock, which translates to ~594.18MHz with your formula; 594MHz seems a lot better for outputting 2160p60 than 297MHz. I tried replacing the 30Hz entry in hdmi_ext_modes (which is the default) with the 60Hz one from cea_modes, but it doesn’t change anything (the clock is still reported as 297MHz in the boot log and the refresh rate is still 30Hz).
Calculations from gtf seem wrong to me (I’ve never seen a 712MHz pixel clock; it seems way too high). I also don’t know what all the other numbers mean (other than 3840 and 2160).
EDIT: in my opinion, a better option would be to convert the “cea_modes” 3840x2160 @ 60Hz entry to a ModeLine that checks out. It avoids recompiling the kernel and modifying driver constants.
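For reference, the usual fb_videomode → modeline mapping is: hsync start = xres + right_margin, hsync end = that + hsync_len, htotal = that + left_margin (and the same pattern vertically with lower_margin/vsync_len/upper_margin). Applied to the standard CEA 2160p60 timings assumed above for that cea_modes entry, and using the exact 594.00MHz CEA clock rather than the rounded 594.18, that works out to roughly:

"3840x2160_60cea"  594.00  3840 4016 4104 4400  2160 2168 2178 2250  +HSync +VSync

(594.00 MHz ÷ (4400 × 2250) comes out to exactly 60.00 Hz.)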
There were some modes which needed a patch; I don’t know which those are (NVIDIA would need to comment…it is a recent topic, and apparently the patch is needed even in R28.1). This may be one of those modes.
The basic idea is to replace the pixel clock with a static value higher than the one normally used for 3840x2160 @ 30Hz (artificially increasing the frame rate).
It’s done in /display/drivers/video/tegra/dc/hdmi2.0.c by replacing the
return pclk;
by
return 593583416;
WayneWWW initially suggested using 533250000 which resulted (as reported by the monitor) in a 56Hz refresh rate.
I took the 3840x2160 @ 60Hz entry in “cea_modes” in modedb.c and calculated the clock with your formula. It gave me a 594177000 clock (rounded to the kHz), which didn’t work (the monitor went black).
I made a few other attempts, including multiplying it by 1000 and dividing by 1001 (the usual 60 → 59.94 adjustment), which gave me the value 593583416. Now it works on my Dell UP3216Q, which reports 60Hz.
The Jetson still thinks it’s outputting 30Hz. But the mouse is definitely more fluid.