Building FPGA driver for TX1

I put a custom XMC FPGA card on a PCIe/XMC carrier card and plugged it into the TX1's PCIe slot; the FPGA card was detected by “lspci”.

I tried to build the driver on the TX1 from the driver source with “./configure” and “make” but got errors. I figured out it's because the kernel is 64-bit but the rootfs and compiler are 32-bit.
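For anyone hitting the same wall: the kernel/userspace split can be confirmed directly on the board. A quick check, assuming a standard L4T r23.x install (commands are generic POSIX tools, nothing board-specific):

```shell
# On L4T r23.x the kernel architecture and the userspace word size disagree
# (64-bit kernel, 32-bit rootfs), which is what breaks a native module build.
uname -m            # kernel architecture, e.g. "aarch64"
getconf LONG_BIT    # word size of the running userspace, e.g. "32"
```

If the two answers disagree (aarch64 vs. 32), the stock compiler in the rootfs cannot build modules for the running kernel.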

I installed a 64-bit compiler using dusty_nv's script but still got errors, e.g.,

scripts/basic/fixdep: 1: scripts/basic/fixdep: Syntax error: "(" unexpected
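That particular message usually means the kernel could not exec fixdep at all (it was built for a different CPU), so the shell fell back to parsing the ELF binary as a script and choked on it. One way to confirm, sketched here against /bin/sh since coreutils' od is always available (on the board you would point it at scripts/basic/fixdep instead):

```shell
# Dump the first 20 bytes of the ELF header. Byte 5 is the class
# (01 = 32-bit, 02 = 64-bit); bytes 19-20 are the machine, little-endian
# (28 00 = ARM, b7 00 = AArch64, 3e 00 = x86-64). If fixdep's machine does
# not match "uname -m", the build ran a wrong-architecture host tool.
od -An -tx1 -N 20 /bin/sh
uname -m
```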

Having failed to build the driver natively, I tried to cross-compile it from an x64 Ubuntu PC with

“export ARCH=arm64”
“export CROSS_COMPILE=/usr/bin/aarch64-linux-gnu-”

“./configure -sysroot [path to l4t r23.1 root] -kernel [path to r23.1 kernel]”

“make”

and was able to build the driver without errors.

I copied the driver package to the TX1 and ran “make install” without error. After that, I have:

“vendor.ko” in “/lib/modules/3.10.67-gcdddc52/kernel/drivers/addon/vendor”,

udev rule files in “/etc/udev/rules.d”,

API lib modules in “/usr/lib” with symbolic links,

all of which look correct per the driver installation instructions.

But when I tried “sudo modprobe vendor”, I got

modprobe: ERROR: could not insert 'vendor': Exec format error

“uname -r” showed “3.10.67-gcdddc52”

“modinfo vendor” showed

filename:       /lib/modules/3.10.67-gcdddc52/kernel/drivers/addon/vendor/vendor.ko
license:        GPL
depends:        
vermagic:       3.10.67-gcdddc52-dirty SMP preempt mod_unload aarch64
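For what it's worth, “Exec format error” means the kernel rejected the module's ELF image itself (architecture/class mismatch), before any symbol or dependency checks; the kernel log normally states the exact reason. A few checks worth running on the board (the commented commands assume the module name “vendor” from above, so I've left them as comments):

```shell
# Running kernel release and architecture; both must match what the module
# was built against (the modinfo output above reports vermagic
# "3.10.67-gcdddc52-dirty" for an aarch64 build).
uname -rm
# On the TX1 itself:
#   modinfo -F vermagic vendor   # module's version magic
#   dmesg | tail -n 5            # kernel's reason for refusing the insert
```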

The same FPGA card and driver package have been used on different x64 PCs without issue.

Could this issue be caused by the TX1's 64-bit kernel / 32-bit rootfs split, or by something else?

Thanks for any input.

I'm not sure which parts of a kernel cross-compile use the 64-bit compiler versus the 32-bit one, but since both are used for a JTX1 kernel cross-compile, I'd assume there is some 64/32-bit “glue” in there, which may be why this failed. You might try the same cross-compile with ARCH=arm and naming the 32-bit cross-compiler instead…or providing both CROSS_COMPILE and CROSS32CC with the original ARCH=arm64. If your kernel code never touches anything outside the kernel, it could probably all run as 64-bit, but once anything outside the kernel is touched it might also need 32-bit.
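A sketch of the two variants just described; the toolchain paths are assumptions (use whichever 32-bit ARM cross-toolchain is actually installed, e.g. arm-linux-gnueabihf), and the configure/make steps are the same ones from the original post:

```shell
# Variant 1: build everything as 32-bit ARM to match the 32-bit rootfs.
export ARCH=arm
export CROSS_COMPILE=/usr/bin/arm-linux-gnueabihf-
# ./configure -sysroot [path to l4t r23.1 root] -kernel [path to r23.1 kernel]
# make

# Variant 2: keep the 64-bit build, but also hand the build a 32-bit
# compiler for the kernel's 32-bit pieces via CROSS32CC.
export ARCH=arm64
export CROSS_COMPILE=/usr/bin/aarch64-linux-gnu-
export CROSS32CC=/usr/bin/arm-linux-gnueabihf-gcc
# ./configure -sysroot [path to l4t r23.1 root] -kernel [path to r23.1 kernel]
# make
```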

Unfortunately, the vendor-supplied source/makefile/configuration does not specify which part of the code runs in 64-bit kernel space and which part runs in 32-bit user space.

A mixed 64-bit/32-bit kernel/user space is a nightmare for development.

I may have to wait until NVIDIA releases a 64-bit rootfs.

Even then, I'm not sure whether TX1 memory supports DMA from the FPGA over the PCIe bus.

For PCIe and DMA, you may find this of interest:
https://devtalk.nvidia.com/default/topic/902303/jetson-tk1/dma-transfer-between-jetson-tk1-and-pcie/

That's a very helpful link. Our FPGA card does have a DMA engine, and the driver/SDK support it. It seems FPGA-to-TX1-memory DMA transfers are possible. I'll see how to get the FPGA driver working.

Thanks