PCIe to VGA Converter

Hello,

We have a custom carrier board developed for the Jetson TX2. Our customer requires two VGA or DVI outputs to display two different video streams on two monitors.

We have two free PCIe interfaces on our custom carrier board to connect a PCIe to VGA converter.

Can you suggest an industrial-grade PCIe to VGA/DVI converter that is compatible with the Jetson TX2 platform? We are using kernel 4.9.135.

Looking forward to your expert support.

Thanks and Regards,
Asif Ikbal

There is a wire on HDMI (and digital DVI, but not analog DVI) with the label “DDC”. This wire is responsible for sending “EDID” data to the computer, which tells the computer about the capabilities of the monitor. This is the only supported method of configuring a monitor, so expect VGA to fail, as well as analog DVI. However, digital DVI should work. Any adapter which cuts the DDC wire will also cause failure. I do not have a specific recommendation, but you’ll need some combination of DVI-D and/or HDMI.
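
As a quick way to see whether EDID is actually reaching the computer, you can check the DRM layer’s view of each connector. This is a generic sketch (connector names vary by board, and the xrandr variant assumes a running X session):

# Each DRM connector exposes an "edid" file; 0 bytes means no EDID was received:
for e in /sys/class/drm/*/edid; do
  echo "$e: $(wc -c < "$e") bytes"
done
# Alternatively, from within a running X session:
xrandr --prop | grep -A8 EDID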

Hello linuxdev,

We are considering the VL-MPEe-V5 from VersaLogic, which is a video expansion module with the following platform and driver details:

  1. Chipset: Silicon Motion SM750, a 2D graphics accelerator video core with a 128-bit 2D graphics engine that supports a single display, two cloned displays, or two simultaneous independent displays

  2. BIOS/on-board firmware: on-board SPI-based video BIOS supports VESA standard graphics modes

  3. Drivers: compatible with most x86 operating systems including Windows, Windows Embedded, and Linux using standard software drivers.

The datasheet states that the module uses a standard PCIe driver and runs on-board firmware on the chipset to provide the display interface.

Can we use this device for our application? Is it possible to port their driver to the Jetson TX2 platform, and how much effort would be involved?

Thanks and Regards,
Asif Ikbal

It is hard to say if the device will work. This might not be important for your case, but most video cards on a PC are based on PCI. PCI has methods available to query the video function. The GPU on a Jetson is not PCI, and so no PCI function can be used to detect or configure it (the Jetson integrated GPU is wired directly to the memory controller). In theory, if this card does not require any PCI discovery, it might work. If the card uses PCI discovery, then there is no possibility of it working without recoding the PCI discovery.
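
Once you have any candidate card in hand, one quick check is whether it even enumerates on the Jetson’s PCIe bus. A minimal sketch (the bus address is an example; adjust to what lspci reports):

# List PCIe devices the Jetson detected; a VGA/display controller entry should show up:
lspci
# More detail on a specific device, e.g., its BARs and capabilities:
sudo lspci -vvv -s 01:00.0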

Now if your card does not require interaction with the iGPU, then all of the above does not matter. You still have the issue of the arm64/aarch64 architecture, but if the driver content is available for the Jetson’s architecture, then there is a reasonable chance it will work.

Provided that the driver’s source code is available, and provided the driver does not use features found in the desktop PC, but not found in the Jetson’s architecture (think SIMD type extensions), then you could just compile this directly in the Linux kernel and it should “just work” (there are a lot of “if”'s though, so this is far from a guarantee). If the driver source code uses PC architecture special functions though, then the code would have to be modified to either (A) no longer use that function, or (B) try to port it to whatever “similar” function is available in the ARM processor.

Anything other than being able to build the feature without modification is a lot of work. One advantage you have is that you can try to compile any given Linux feature for a kernel without having the actual hardware. For example, you could cross compile from the Linux PC and see if the feature builds for arm64, or natively build the kernel with that feature/driver directly on a Jetson. If the feature can be compiled, then your odds of this working just went up dramatically. Still, it is not a guarantee until you actually have both the driver and the hardware.
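
As a concrete example of that test compile, here is a minimal sketch of cross compiling just the staging SM750 directory from an x86_64 Linux PC. The toolchain prefix, paths, and output directory are assumptions; adjust them to your own setup:

# From the top of the L4T kernel source (kernel-4.9), assuming an aarch64 cross
# toolchain such as aarch64-linux-gnu-gcc is installed:
export ARCH=arm64
export CROSS_COMPILE=aarch64-linux-gnu-
mkdir -p $HOME/tx2_build
make O=$HOME/tx2_build tegra_defconfig
# Enable the staging SM750 framebuffer driver as a module (same effect as doing it in nconfig).
# FB_SM750 depends on CONFIG_STAGING, CONFIG_FB, and CONFIG_PCI also being enabled:
./scripts/config --file $HOME/tx2_build/.config --module FB_SM750
make O=$HOME/tx2_build olddefconfig
make -j4 O=$HOME/tx2_build modules_prepare
# Quick check: does the staging driver directory compile for arm64?
make -j4 O=$HOME/tx2_build drivers/staging/sm750fb/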

Hello linuxdev,

Thanks for your response.

I am planning to compile the SM750 driver natively on the Jetson. I have found the driver source code in the staging directory of the kernel source. As I understand it, drivers that are still under development stay in the staging directory.

What is the best way to compile only the SM750 driver as a module natively?

As I said, the driver is in the staging directory. What errors might I face during compilation? Is any additional setup required to compile a staging driver?

Looking forward to your expert support.

Thanks and Regards,
Asif Ikbal

Looking at a running Jetson’s kernel config I see that the driver is already part of the source tree, but needs activation:
# CONFIG_FB_SM750 is not set
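
You can verify this yourself on a running Jetson, e.g.:

# Inspect the running kernel's configuration for the SM750 symbol:
zcat /proc/config.gz | grep SM750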

The best native compile is to add source to the Jetson (make sure you have enough disk space for any temporary output since it is likely there is not enough output space on a TX2, e.g., a thumb drive could be used for output), set the configuration to match the existing Jetson (e.g., copy “/proc/config.gz” to the output location “O=/some/where”, gunzip it, and rename it “.config”), and note the output of “uname -r”. The suffix of “uname -r” is what the “CONFIG_LOCALVERSION” of the “.config” should be set to. If nothing has ever changed, then this would be “-tegra”. Example in “.config” at the “O=/some/where” compile option:
CONFIG_LOCALVERSION="-tegra"

This means at this point the kernel being built is an exact match to what is running. You can then use a config editor (many people use “menuconfig”, I use “nconfig” since it can search for symbols, e.g., you can search for “SM750”), and see if it is available as a module. If so, then use the “m” key to enable this. Then, from the source directory:

make O=/some/where modules_prepare
make O=/some/where modules

There are then options if you want to copy the module to a temp location (I recommend sending output to a temp location, followed by a manual copy of the single module to the right “/lib/modules/$(uname -r)/kernel” subdirectory).
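
For the SM750 case specifically, a sketch of that manual copy might look like this (the module name sm750fb.ko and its staging subdirectory are what I would expect from the 4.9 source tree; verify against what the build actually produced):

# Locate the freshly built module in the temporary output area:
find /some/where -name 'sm750fb.ko'
# Copy it into the matching path under the running kernel's module directory:
sudo mkdir -p /lib/modules/$(uname -r)/kernel/drivers/staging/sm750fb
sudo cp /some/where/drivers/staging/sm750fb/sm750fb.ko /lib/modules/$(uname -r)/kernel/drivers/staging/sm750fb/
sudo depmod -a
sudo modprobe sm750fb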

Notes:

  • If you have not installed the correct “libncurses5-dev” package, then the config editors won’t be available. You would fix this via “sudo apt-get install libncurses5-dev”.
  • I recommend building the full kernel “make -j 6 O=/some/where Image” even if you won’t use it. This also performs the “modules_prepare” step, but more importantly, is something of an acid test for whether things are going to work correctly. This takes time, but if Image compiles correctly once, then you know everything else should already be set up properly. “modules_prepare” should also work without building “Image”, but is not necessarily a thorough QA test.
  • If CONFIG_FB_SM750 cannot be set as a module, meaning the “m” key won’t enable this and only the “y” key can be used, then you must compile Image. You’d want to install it under a new file name and leave the original file name intact. Then add a new entry to “/boot/extlinux/extlinux.conf” to point at the new kernel (see the extlinux.conf sketch after this list; if it works, then you could remove the original entry and kernel, but it is very valuable to have a backup kernel). Note that if you must build Image for install, you also need all modules built and installed. You’d want to alter the “CONFIG_LOCALVERSION” to something other than “-tegra” so that the module directory is new and not mixed with the original, e.g., something like:
    CONFIG_LOCALVERSION="-sm750"

Hopefully this can be built only as a module. Here is something of a template/recipe for native build (the directories are only examples…you should point at some temporary storage mount point so you have enough space…though I suppose if you have very little on the Jetson that something like a thumb drive would not be needed):

# --- Setting Up: -------------------------------------------------------
# DO NOT BUILD AS ROOT/SUDO!!! You might need to install source code as root/sudo.

# --- Notes: ------------------------------------------------------------
# It is assumed kernel source is at "/usr/src/sources/kernel/kernel-4.9".
# Check how many CPU cores are online, e.g., via "htop".
# If cores are missing, then experiment with "sudo nvpmodel -m 0" (or "-m 1", "-m 2").
# Using "-j 6" in the hints below because of the assumption of 6 available cores.
# -----------------------------------------------------------------------

mkdir -p "${HOME}/build/kernel"
mkdir -p "${HOME}/build/modules"
mkdir -p "${HOME}/build/firmware"

export TOP="/usr/src/sources/kernel/kernel-4.9"
export TEGRA_KERNEL_OUT="${HOME}/build/kernel"
export TEGRA_MODULES_OUT="${HOME}/build/modules"
export TEGRA_FIRMWARE_OUT="${HOME}/build/firmware"
export TEGRA_BUILD="${HOME}/build"

# Compile commands start in $TOP, thus:
cd $TOP

# Do not forget to provide a starting configuration. Probably a copy of "/proc/config.gz"
# placed in $TEGRA_KERNEL_OUT, but perhaps also via the make target "tegra_defconfig",
# plus your own edits after the initial config:
make O=$TEGRA_KERNEL_OUT nconfig

# If building the kernel Image (not needed if only modules being built, but recommended
# for testing if all is well before building modules):
make -j 6 O=$TEGRA_KERNEL_OUT Image

# If you did not build Image, but are building modules, then run modules_prepare first:
make -j 6 O=$TEGRA_KERNEL_OUT modules_prepare

# To build modules:
make -j 6 O=$TEGRA_KERNEL_OUT modules

# To build device tree content (NOT NEEDED BUT SHOWN):
make -j 6 O=$TEGRA_KERNEL_OUT dtbs

# To put modules in "$TEGRA_MODULES_OUT" (then one would look at the path within this
# subdirectory to see where your kernel module was placed, and copy it manually to the
# similar path at "/lib/modules/$(uname -r)/kernel", noting that we are using the "uname -r"
# of the new kernel, not the old kernel...perhaps it is the same if using just a module, but if
# a new Image is being installed, then "uname -r" would have changed):
make -j 6 O=$TEGRA_KERNEL_OUT modules_install INSTALL_MOD_PATH=$TEGRA_MODULES_OUT

# To put firmware in "$TEGRA_FIRMWARE_OUT" (NOT NEEDED FOR YOU BUT SHOWN;
# device trees were already built above via the "dtbs" target):
make -j 6 O=$TEGRA_KERNEL_OUT firmware_install INSTALL_FW_PATH=$TEGRA_FIRMWARE_OUT
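
Once the card is installed and the module is in place (or the new Image is booted), a few quick checks can confirm things came up. This is a sketch; the module name sm750fb is an assumption based on the staging driver:

# Did the card enumerate, and did a kernel driver bind to it?
lspci -k
# Is the module loaded, and did it log anything interesting?
lsmod | grep sm750
dmesg | grep -i sm750
# A working framebuffer driver should add another framebuffer device node:
ls /dev/fb*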
