How to configure display timing via EDID

Hi all,

We are developing a display device with extended functionality, and as a result we need a long blanking time. I am trying to configure the display timing via EDID so that I don’t need to create a user-defined timing on each new source of display data. Therefore I activated the preferred timing mode and configured a detailed timing descriptor in the first data block. During my tests I also removed all established and standard timings (in case they have a higher priority than the preferred mode), but the ION graphics card seems to completely ignore the given timing.

So my question is: Is it possible to set a detailed timing via EDID, and if so, how do I do that?

Thanks and regards,


I try to stay away from manual timing setup…I’m thankful that VGA connectors are going away! :)

What you are requesting is the arcane knowledge of how monitor timings used to be transformed into modelines in the days of pure VGA…manufacturers would provide modeline information and specifications for people to enter manually into configuration files. The difference now is that much of this information comes automatically from the monitor itself via the DDC channel/EDID instead of being looked up in a database.

Tools from the “read-edid” and “edid-decode” packages combine to extract and view what a monitor reports via its EDID…essentially a human-readable version of what the Xorg server uses. The parse-edid output includes modelines. Assuming you have the packages installed, run one of these:

get-edid | parse-edid
get-edid | edid-decode

I suspect that if you look at the source code of parse-edid (which outputs standard modelines, based on the monitor’s EDID info, for direct inclusion in “/etc/X11/xorg.conf”) you’ll see a standardized and common way to convert raw EDID data into modelines. Descriptions of modelines can be found here and might be enough without even looking at parse-edid:
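If it helps to see that conversion concretely, here is a minimal Python sketch of how an 18-byte EDID Detailed Timing Descriptor maps onto a modeline. This is my own illustration of the standard EDID 1.3 field layout, not code taken from parse-edid; the example bytes encode the common 1920x1080@60 CEA timing:

```python
# Sketch: decode an 18-byte EDID Detailed Timing Descriptor into an
# Xorg-style modeline (the same conversion parse-edid performs).

def dtd_to_modeline(d):
    """Convert an 18-byte EDID detailed timing descriptor to a modeline string."""
    pixclk_mhz = (d[0] | d[1] << 8) / 100.0         # stored in units of 10 kHz
    hactive = d[2] | (d[4] & 0xF0) << 4             # upper 4 bits live in byte 4
    hblank  = d[3] | (d[4] & 0x0F) << 8
    vactive = d[5] | (d[7] & 0xF0) << 4
    vblank  = d[6] | (d[7] & 0x0F) << 8
    hsync_off   = d[8]  | (d[11] & 0xC0) << 2       # high bits packed in byte 11
    hsync_width = d[9]  | (d[11] & 0x30) << 4
    vsync_off   = d[10] >> 4   | (d[11] & 0x0C) << 2
    vsync_width = d[10] & 0x0F | (d[11] & 0x03) << 4
    return ('Modeline "%dx%d" %.2f %d %d %d %d %d %d %d %d' % (
        hactive, vactive, pixclk_mhz,
        hactive, hactive + hsync_off, hactive + hsync_off + hsync_width,
        hactive + hblank,
        vactive, vactive + vsync_off, vactive + vsync_off + vsync_width,
        vactive + vblank))

# 1920x1080@60 (148.5 MHz pixel clock, 280/45 pixel blanking):
dtd = bytes([0x02, 0x3A, 0x80, 0x18, 0x71, 0x38, 0x2D, 0x40,
             0x58, 0x2C, 0x45, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00])
print(dtd_to_modeline(dtd))
# -> Modeline "1920x1080" 148.50 1920 2008 2052 2200 1080 1084 1089 1125
```

The hblank/vblank fields are where extra blanking would go…the htotal and vtotal of the resulting modeline are simply active plus blanking.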

Hi linuxdev,

do I understand correctly that there is no automatic mechanism for creating the desired timing scheme in DVI/HDMI mode? So I will always have to create a manual timing?

Thanks and regards

If you are using pure VGA you would need to do this manually (pure VGA has no DDC channel and thus no EDID or automation). Your original post leads me to view this as having a monitor which does provide EDID, but you want the timing to be different from what is automatically provided based on that EDID…the extra blanking time (which translates into a modeline). As long as you cannot change the monitor’s actual EDID output and you are not satisfied with the way timing is set up from the “standardized” use of the EDID, manual methods will be required. I suppose you could go into the actual X11 server, find out how it uses EDID, and modify that, but you’ll have a non-standard server which is a pain to maintain when something else updates.

Consider that the command “get-edid | parse-edid” is what the system is doing anyway, just externally to the X11 server, which makes it useful for debugging. You can essentially change what data goes into “get-edid”, or you can change how X11’s analogue to “parse-edid” uses the information (how it decides modeline values).

I can think of a workaround…an app like awk (gawk) can easily parse the default content of /etc/X11/xorg.conf (a symlink to xorg.conf.jetson-tk1) and splice in the “Monitor” section of the parse-edid output…if you understand the modeline output of parse-edid and wish to alter it, that too is easily done from awk. This could then be added to rc.local or some other point in init.
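The same splice is easy in Python if you prefer it over awk…a rough sketch (the section contents below are placeholders of my own, not taken from an actual Jetson config):

```python
import re

def splice_monitor_section(conf_text, monitor_section):
    """Replace the first Section "Monitor" ... EndSection block, or append one."""
    pattern = re.compile(r'Section\s+"Monitor".*?EndSection', re.DOTALL)
    if pattern.search(conf_text):
        # lambda avoids backslash-escape surprises in the replacement text
        return pattern.sub(lambda m: monitor_section, conf_text, count=1)
    return conf_text + "\n" + monitor_section + "\n"

old_conf = (
    'Section "Monitor"\n'
    '    Identifier "DefaultMonitor"\n'
    'EndSection\n'
)
new_monitor = (
    'Section "Monitor"\n'
    '    Identifier "CustomMonitor"\n'
    '    # hypothetical modeline with the blanking you actually want\n'
    '    Modeline "1920x1080_custom" 148.50 1920 2008 2052 2200 1080 1084 1089 1125\n'
    'EndSection'
)
print(splice_monitor_section(old_conf, new_monitor))
```

In practice you would read the real xorg.conf, splice in the “Monitor” section produced by parse-edid (edited or not), and write it back before X starts.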

Do you have the ability to alter the EDID data sent from the monitor?

I can alter the EDID, and this is exactly what I want to do. I want to create an EDID that contains a timing that fits our needs and is used by the OS and graphics card to drive our monitor. But the EDID mainly contains timing schemes defined by VESA that have less blanking than we need. Because of this I defined a detailed timing in the first data block of the EDID, which is described as the recommended way of providing a preferred timing. Additionally I defined the same timing in the CEA extension block. But neither the given resolution nor the blanking is used by the OS and graphics card. The fundamental questions are: Can I define a detailed timing that is transmitted via EDID and used automatically by the graphics card/OS? How do the graphics card and OS process the EDID, and which information do they use?
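For illustration, the reverse direction…packing a desired timing into the 18-byte descriptor…looks roughly like this in Python (my own sketch of the EDID 1.3 detailed timing layout; also note that byte 127 of the base block is a checksum that must be recomputed after any edit, and a stale checksum can be a reason a hand-modified EDID gets ignored):

```python
def make_dtd(pixclk_khz, hact, hblank, vact, vblank,
             hsync_off, hsync_w, vsync_off, vsync_w):
    """Pack a timing into an 18-byte EDID detailed timing descriptor."""
    clk = pixclk_khz // 10                       # stored in units of 10 kHz
    d = bytearray(18)
    d[0], d[1] = clk & 0xFF, (clk >> 8) & 0xFF   # pixel clock, little-endian
    d[2], d[3] = hact & 0xFF, hblank & 0xFF
    d[4] = (hact >> 8) << 4 | (hblank >> 8)      # upper 4 bits of each
    d[5], d[6] = vact & 0xFF, vblank & 0xFF
    d[7] = (vact >> 8) << 4 | (vblank >> 8)
    d[8], d[9] = hsync_off & 0xFF, hsync_w & 0xFF
    d[10] = (vsync_off & 0xF) << 4 | (vsync_w & 0xF)
    d[11] = ((hsync_off >> 8) << 6 | (hsync_w >> 8) << 4 |
             (vsync_off >> 4) << 2 | (vsync_w >> 4))
    d[17] = 0x1E                                 # digital separate sync, +h +v
    return bytes(d)

def fix_checksum(block):
    """Recompute byte 127 so a 128-byte EDID block sums to 0 mod 256."""
    b = bytearray(block)
    b[127] = (256 - sum(b[:127])) % 256
    return bytes(b)

# e.g. the standard 1920x1080@60 timing; substitute your long-blanking values:
dtd = make_dtd(148500, 1920, 280, 1080, 45, 88, 44, 4, 5)
```

A longer blanking time means larger hblank/vblank values (and usually a higher pixel clock to hold the refresh rate, since refresh = clock / (htotal x vtotal)).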

One observation I’ve made is that when the “get-edid | parse-edid” combination is unable to parse due to some format problem, the X11 server is also unable to work with the EDID data. The “Monitor” section output of parse-edid seems to contain exactly the same timings as when X11 uses the EDID dynamically…so apparently they use the same formulas and formats. If you can reverse this, figure out what modeline you want, and convert it to the EDID data which would produce it, then you should succeed. Everything turns into a modeline, which is what X11 uses…regardless of whether the data comes from the actual EDID or something someone edited. I’d really suggest manually entering a “Monitor” section in your xorg.conf until it does what you want…and then try to get the same output automatically via the “get-edid | parse-edid” combination.
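One quick way to catch that kind of format problem before any parser sees the EDID is a sanity check on the base block…a small Python sketch (header magic and checksum rule per the EDID spec; this is just illustrative, not what X11 literally runs):

```python
# Fixed 8-byte magic that starts every EDID base block:
EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def edid_block_ok(block):
    """Minimal validity check for a 128-byte EDID base block."""
    if len(block) != 128:
        return False              # base block is always exactly 128 bytes
    if block[:8] != EDID_HEADER:
        return False              # wrong header magic
    return sum(block) % 256 == 0  # all 128 bytes must sum to 0 mod 256

# a truncated or zeroed buffer should fail:
print(edid_block_ok(bytes(64)))
```

edid-decode performs these checks (and many more), so a failure here should also show up in its output.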

Ok, the Linux test environment is now producing a timing that is close to what I am expecting. Now I have to solve this problem for Windows systems. Thanks again for your help.

Assuming Windows uses the EDID data, the result should be very close to the same…a correct EDID setup implies Windows should also find the correct modes and timings (telling it to use that mode or timing might differ). Any failure would likely be due to the EDID data being in a format Windows doesn’t parse.