TX1 HDMI 2.0 Interface

We have a couple of Jetson TX1 development boards but have had varying success when using monitors with DVI via an HDMI to DVI adapter. Some work and some don’t. I have found that a monitor that won’t work will start working after plugging in a monitor that does. I also have the TK1 development platform and have not had the same issues. Is this something related to the HDMI 2.0 interface on the TX1?

Going through the design guide and looking at the TX1 development platform, it suggests putting AC coupling caps on all the differential pairs, as well as HDMI termination and a common mode choke. I understand that going from 1.4 to 2.0 the speed increases from 3.4Gbps to 6Gbps. Are these additions required for the HDMI 2.0 interface? Having done several HDMI 1.4 designs, these were never required, and they are also not on the TK1.

Hi snageli,

I can’t answer you for sure, but what you are talking about could be a lead, i.e., the AC caps and ESD/EMI protections.

Another lead could be how much power the on-cable converter (HDMI to DVI) needs.
Looking at the Jetson TX1 carrier board schematic (in “P2597_B02_Concept_Schematics.pdf”, page 14) you can see that the current on the HDMI interface is limited to 300mA… which should be enough (if memory serves, the standard says 55mA… to be confirmed), but who knows what’s in the converter…
My third thought is much the same as yours (and is probably the right one, given that they work after plugging in a “good” one):
because the HDMI controller is shared between HDMI and DisplayPort, you have to enable a hardware level shifter (still page 14 of the schematic) based on a ferrite bead + resistor.
Maybe this induces a parasitic remanent current in the bead that is “released” when you plug a good monitor in.

Have you tried the same non-working monitor with other DVI/HDMI adapters?


One issue I’ve observed on the JTX1 (and which is being worked on) is that some older monitors initially have their legacy DDC/EDID response ignored.

My monitor, for example, stays blank at startup until X11 starts; while blank, it reports that the signal is scanning too fast. A monitor starts by sending DDC/EDID with the available modes from older/slower standards, then transmits newer/faster EDID extensions for more modern modes (e.g., high-def), and finally the newest standards (e.g., ultra-high-def). So this behavior tends to mean the original/oldest standard is not being read correctly: the scan is too fast for my monitor while still in text mode, which indicates it is defaulting to one of the high-def standards this monitor does not support. When it gets to the GUI stage, apparently X11’s reading of EDID works (the console and X11 stages seem to use different EDID software, which makes sense since X11 is not installed everywhere and console/X11 need to remain independent; their software for parsing DDC/EDID is separate and independent of each other).
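As an aside, the base-block/extension layering described above is visible in the raw EDID bytes: byte 126 of the 128-byte base block declares how many extension blocks (e.g., a CEA-861 block carrying the newer high-def modes) follow. A minimal Python sketch, with a made-up byte array:

```python
# Byte 126 of the 128-byte EDID base block is the extension flag:
# the number of extension blocks (e.g., a CEA-861 block carrying
# the newer high-def modes) that follow the base block.

def extension_count(edid: bytes) -> int:
    """Number of extension blocks declared by the EDID base block."""
    if len(edid) < 128:
        raise ValueError("need at least the 128-byte base block")
    return edid[126]

# Made-up example: a base block declaring one CEA-861 extension.
fake_edid = bytearray(128)
fake_edid[126] = 1
print(extension_count(bytes(fake_edid)))  # -> 1
```

A monitor whose oldest (base-block) modes are misread would fail before the extensions even matter, which matches the text-mode symptom above.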

I suspect that if first plugging in a “good” monitor makes it possible for a “bad” monitor to work, it simply means there was a mode compatible with both monitors, but only the “good” monitor had parsable EDID. That would have left the video in the good monitor’s mode, so when the “bad” monitor is plugged in while still in that mode, it works.

Appreciate the feedback on the HDMI/DVI issue. The adapter I am using is just a passive cable adapter, but I have several of those, so I can try different combinations of adapters and monitors. At one point I had the serial debug port up when this was happening and I recall it mentioning some timing errors. I will have to look at that again; it would coincide with the DDC/EDID timing theory.

I am also curious whether you have any feedback on my other question about the validity of the AC coupling caps and HDMI termination on the current TX1 carrier board.

I can’t answer about the coupling caps. I would use whatever the standards require (which is likely why the nVidia docs use this). However, the actual values may need adjustment due to circuit board trace design and the dielectric of the board material.

It’s very difficult to debug because any hardware capable of monitoring what is really going on will itself alter things, and a protocol analyzer or other measurement system capable of providing the information you need without causing other issues is typically extremely expensive. You may at least be able to observe how clean the waveform is on the DDC lines and compare “good” and “bad” monitors, and how different decoupling capacitors change this (i.e., check the DDC first to see if a response is given at all, then check how “clean/dirty” the waveforms look with each capacitor value, just for a ballpark “do the waveforms look right” after DDC responds).

Hi snageli,

These AC caps and pulldowns are a must for the TX1 but not for the TK1, as the HDMI pads are different. HDMI 2.0 is more critical on timing than HDMI 1.4, and so is the adapter. The HDMI 2.0 interface on the TX1 is verified before shipment; I think you should check the adapter timing or initialization sequence first.

I’ve had similar experiences, plugging into monitors (DVI, HDMI) that recognize the Jetson in order to “bootstrap” onto other monitors, including 4K ones. Older DVI panels are especially problematic (1280x1024, internal VGA->DVI converter).

At boot, the HDMI state machine determines whether the attached device is UHD or HD (HDMI 2.0 or not), and then configures the signals and EDID accordingly.

There is a known issue with existing UHD Sharp and LG panels that can be worked around with the “bootstrap” trick. Expect enhanced HDMI capabilities in future BSP releases. (The TX1-based ShieldTV had the benefit of a plugfest, whereas Jetson did not, but do expect Jetson’s compatibility list to grow versus ShieldTV.)


Which types of displays, other than computer monitors, can be connected to the Jetson TX1?

The interfaces supported are eDP and HDMI (basically anything with an EDID/DDC query can adapt to HDMI).

This issue has not been fixed yet. I just received a new TX1 board and found the HDMI connection did NOT work. Why?

HDMI (and other modern video connectors) has a wire which supports querying the monitor for its capabilities. Automatic configuration of the video mode works via this (in the old days, config files required manual editing, or naming a setting from a database of known monitor settings). Should automatic configuration fail, there is a fallback default mode. The default sometimes matches what the monitor can do, and sometimes does not.

So problem one would be if EDID query works, but the monitor responds with something the video system can’t understand. There are actually many monitors with these “quirks”, and kernel code (somewhat humorously) reflects this.

Problem two would be if the EDID query fails outright. EDID uses the i2c protocol, so if anything is not right there, EDID fails completely. An example would be if a VGA adapter is used, since VGA literally cuts the EDID wire.

Problem three is if EDID works, but for some reason all modes of the monitor are not workable due to drivers and kernel issues.
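Problem two is easy to rule in or out from the raw bytes alone, since a successful DDC read always begins with a fixed header. A small Python sketch (the sample inputs are made up):

```python
# A successful DDC/EDID read always starts with the fixed 8-byte
# header 00 ff ff ff ff ff ff 00. If the bytes read back (e.g., from
# a sysfs "edid" file) fail this check, the i2c query itself went
# wrong -- problem two above -- rather than a mode or driver issue.

EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def looks_like_edid(data: bytes) -> bool:
    """True if data plausibly starts with an EDID base block."""
    return len(data) >= 128 and data[:8] == EDID_HEADER

# Made-up examples: an all-zero read (e.g., a cut DDC wire) fails
# the check, while a proper header passes it.
print(looks_like_edid(bytes(128)))                # -> False
print(looks_like_edid(EDID_HEADER + bytes(120)))  # -> True
```

If this check passes, the remaining suspects are problems one and three: quirky EDID contents or a driver that mishandles valid modes.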

To find out what’s going on you will need either ssh access or serial console access. See:

I understand the topic has been discussed several times, but it was not possible to find a solution to this problem on the forum. When an HDMI cable is connected, the kernel boot process stops with the following messages:

[ 1.939071] tegradc 15210000.nvdisplay: Display dc.15210000 registered with id=0
[ 1.939078] DC OR NODE connected to /host1x/sor1
[ 1.939157] parse_tmds_config: No tmds-config node
[ 1.939163] tegra_camera_platform tegra-camera-platform: tegra_camera_probe:camera_platform_driver probe
[ 1.939234] tegradc 15210000.nvdisplay: DT parsed successfully
[ 1.939273] misc tegra_camera_ctrl: tegra_camera_isomgr_register isp_iso_bw=1250000, vi_iso_bw=1500000, max_bw=1500000
[ 1.946472] tegra_nvdisp_bandwidth_register_max_config: max config iso bw = 16727000 KB/s
[ 1.946474] tegra_nvdisp_bandwidth_register_max_config: max config EMC floor = 665600000 Hz
[ 1.946476] tegra_nvdisp_bandwidth_register_max_config: max config hubclk = 357620000 Hz
[ 1.946505] tegradc 15210000.nvdisplay: vblank syncpt # 7 for dc 1
[ 1.946510] tegradc 15210000.nvdisplay: vpulse3 syncpt # 8 for dc 1
[ 1.948224] tegra-adma 2930000.adma: Tegra ADMA driver register 10 channels
[ 1.949004] tegra-fuse-burn 3820000.efuse:efuse-burn: Fuse burn driver initialized
[ 1.949228] PD DISP0 index2 UP
[ 1.949383] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[ 1.949640] PD DISP1 index3 UP
[ 1.949725] PD DISP2 index4 UP
[ 1.951596] console [ttyS0] disabled
[ 1.963237] V_REF_TO_SYNC >= 1; H_REF_TO_SYNC < 0
[ 1.963241] tegradc 15210000.nvdisplay: Display timing doesn’t meet restrictions.
[ 1.963244] PD DISP2 index4 DOWN
[ 1.963329] PD DISP1 index3 DOWN
[ 1.963403] PD DISP0 index2 DOWN
[ 1.964933] tegradc 15210000.nvdisplay: probed
(kernel load stop)

Kernel boot log when the HDMI cable is disconnected:

[ 1.928045] tegradc 15210000.nvdisplay: Display dc.15210000 registered with id=0
[ 1.928052] DC OR NODE connected to /host1x/sor1
[ 1.928094] misc tegra_camera_ctrl: tegra_camera_isomgr_register isp_iso_bw=1250000, vi_iso_bw=1500000, max_bw=1500000
[ 1.928116] parse_tmds_config: No tmds-config node
[ 1.928197] tegradc 15210000.nvdisplay: DT parsed successfully
[ 1.935244] tegra_nvdisp_bandwidth_register_max_config: max config iso bw = 16727000 KB/s
[ 1.935246] tegra_nvdisp_bandwidth_register_max_config: max config EMC floor = 665600000 Hz
[ 1.935248] tegra_nvdisp_bandwidth_register_max_config: max config hubclk = 357620000 Hz
[ 1.935278] tegradc 15210000.nvdisplay: vblank syncpt # 7 for dc 1
[ 1.935282] tegradc 15210000.nvdisplay: vpulse3 syncpt # 8 for dc 1
[ 1.937023] tegra-adma 2930000.adma: Tegra ADMA driver register 10 channels
[ 1.937883] tegra-fuse-burn 3820000.efuse:efuse-burn: Fuse burn driver initialized
[ 1.937993] PD DISP0 index2 UP
[ 1.938255] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[ 1.938405] PD DISP1 index3 UP
[ 1.938487] PD DISP2 index4 UP
[ 1.939227] tegra-i2c 3190000.i2c: no acknowledge from address 0x50
[ 1.940648] console [ttyS0] disabled
[ 1.940678] PD DISP2 index4 DOWN
[ 1.940767] PD DISP1 index3 DOWN
[ 1.940847] PD DISP0 index2 DOWN
[ 1.942364] tegradc 15210000.nvdisplay: probed
[ 2.358906] 3100000.serial: ttyS0 at MMIO 0x3100000 (irq = 36, base_baud = 25500000) is a Tegra
[ 2.358945] nvmap_background_zero_thread: PP zeroing thread starting.
[ 2.361065] Console: switching to colour frame buffer device 80x30
[ 2.389841] tegradc 15210000.nvdisplay: fb registered
[ 2.389841] tegradc 15210000.nvdisplay: fb registered

Q. How can I disable the display check when the kernel loads?
During software development it is often necessary to reboot the TX2, and performing the “ritual action” of disconnecting the HDMI cable is very annoying.

I can’t answer your question, but it is a bug if the HDMI probe causes a lockup. HDMI itself is intended to be hotplug, so there should be no need for any kind of “ritual action”. One thing I am curious about: can you boot without HDMI, and then plug HDMI in after the system is fully booted, without a lockup? If this succeeds, what is the output of:

sudo cat `find /sys -name 'edid'`

This latter data can show whether the monitor is behaving correctly or not. If the monitor is behaving correctly, then it is likely a bug in the video driver.

Also, describe any cabling adapters.

An HDMI to DVI cable is used to connect to the HP LP2065 monitor. After the system loads, I connect the HDMI cable and the monitor almost immediately displays the “working picture”; see the screenshot at the link below.

I can’t copy and paste the EDID data from an image, but basically it seems EDID is working. So the DVI adapter isn’t doing any harm, and things should work. This looks like a video driver bug rather than a monitor issue (you’d still need to verify the EDID checksum, but if EDID shows up, it’s really rare to have a bad checksum).

Sorry for posting the EDID as a screenshot; here it is as text:

00 ff ff ff ff ff ff 00 22 f0 72 0a 01 01 01 01
 31 10 01 03 80 29 1f 78 ee a6 a5 a3 57 4a 9d 25
 12 50 54 a5 2b 80 31 59 45 59 61 59 81 40 81 80
 81 99 a9 40 01 01 48 3f 40 30 62 b0 32 40 40 c0
 13 00 98 32 11 00 00 1e 00 00 00 fd 00 30 55 1e
 5c 11 00 0a 20 20 20 20 20 20 00 00 00 fc 00 48
 50 20 4c 50 32 30 36 35 0a 20 20 20 00 00 00 ff
 00 43 4e 47 36 34 39 30 33 4d 44 0a 20 20 00 64
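For what it’s worth, a dump like the one above can be checked mechanically: the 128 bytes of the base block must sum to 0 modulo 256 (the last byte is the checksum), and the 18-byte descriptor tagged 0xFC carries the monitor name. A quick Python sketch over the posted bytes:

```python
# Verify the EDID base-block checksum and pull the monitor name
# out of the 0xFC (display product name) descriptor.

EDID_HEX = (
    "00 ff ff ff ff ff ff 00 22 f0 72 0a 01 01 01 01 "
    "31 10 01 03 80 29 1f 78 ee a6 a5 a3 57 4a 9d 25 "
    "12 50 54 a5 2b 80 31 59 45 59 61 59 81 40 81 80 "
    "81 99 a9 40 01 01 48 3f 40 30 62 b0 32 40 40 c0 "
    "13 00 98 32 11 00 00 1e 00 00 00 fd 00 30 55 1e "
    "5c 11 00 0a 20 20 20 20 20 20 00 00 00 fc 00 48 "
    "50 20 4c 50 32 30 36 35 0a 20 20 20 00 00 00 ff "
    "00 43 4e 47 36 34 39 30 33 4d 44 0a 20 20 00 64"
)
edid = bytes.fromhex(EDID_HEX)

# All 128 bytes (including the checksum byte at offset 127)
# must sum to 0 mod 256.
assert sum(edid) % 256 == 0, "bad EDID checksum"

# The four 18-byte descriptors start at offset 54; a descriptor
# beginning 00 00 00 fc holds the display product name.
name = ""
for off in range(54, 126, 18):
    if edid[off:off + 3] == b"\x00\x00\x00" and edid[off + 3] == 0xFC:
        name = edid[off + 5:off + 18].decode("ascii").strip()
print(name)  # -> HP LP2065
```

The zero checksum and the name matching the HP LP2065 agree with the conclusion below that the EDID itself is fine.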

I’m sure the problem is with the video driver. I connected the TX2 to my LG 27UD68 (4K) system monitor via HDMI-HDMI, and the system boots normally.
All right, don’t use old stuff :)
Is it possible to disable the monitor connection check when the kernel boots?

I did view the checksum, and the EDID is valid. Several modes should be supported, so EDID should not have failed. You might want to post a copy of “/var/log/Xorg.0.log” to help, but I’m pretty sure it is a video driver issue.

Link to the log file Xorg.0.log http://fex.net/#!570498474338

I don’t see any particular error, but this is the setting being defaulted to, at the end of the log:

(II) NVIDIA(0): Setting mode "HDMI-0: nvidia-auto-select @1600x1200 +0+0 {ViewPortIn=1600x1200, ViewPortOut=1600x1200+0+0}"

Your monitor does support that mode at 60Hz, but the log does not mention the scan rate. If you watch your monitor closely as it boots and the screen fails, does the monitor, even very briefly, show any text which might indicate something like scan rate too fast? Or anything at all about the mode being out of range?

EDIT: I also find that this last line of the mode set seems to be a truncation of what should occur. On my Viewsonic I get two more lines (but this is on a TX2 which might actually differ):

(II) NVIDIA(0): Setting mode "HDMI-0: nvidia-auto-select @1680x1050 +0+0 {ViewPortIn=1680x1050, ViewPortOut=1680x1050+0+0}"
(--) NVIDIA(GPU-0): ViewSonic VX2235wm (DFP-0): connected
(--) NVIDIA(GPU-0): ViewSonic VX2235wm (DFP-0): External TMDS

So if those last two lines on a TX2 are the same as on a TX1, then a mode was set but the monitor connection was never completed.