CUDA 9 for Jetson TX2

Hi,

The Jetson TX2 development kit I am using runs a third-party OS (other than Ubuntu), and I am interested in installing CUDA 9 on it.

To install CUDA 9 on my platform, is it required to cross-compile CUDA 9 for the target OS? Is there any document that details the setup/install procedure?

Thanks in advance.

CUDA 9 itself isn’t built from source; the binaries are provided and installed by JetPack. If you are using an alternate OS, what you can try is having JetPack download the packages to your host (they will be stored under the jetpack_downloads/ directory), then copying them over and installing them on your Jetson yourself.
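
For example, the copy step might look like this (jetpack_downloads/ is JetPack’s default download location; the Jetson address is just a placeholder):

# On the host where JetPack ran:
scp ~/jetpack_downloads/cuda-repo-l4t-*.deb nvidia@<jetson-ip>:/tmp/
scp ~/jetpack_downloads/nv-tensorrt-repo-*.deb nvidia@<jetson-ip>:/tmp/
# Then log into the Jetson and install them from /tmp/ as described below.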

What can be compiled natively or cross-compiled are the CUDA toolkit samples. JetPack cross-compiles them from the host to save time, but you can build the samples natively on your Jetson just as well.
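
As a sketch, assuming the toolkit installed to the default /usr/local/cuda-9.0 prefix, a native build of the samples on the Jetson looks like:

# Copy the samples to a writable location and build them in place:
/usr/local/cuda-9.0/bin/cuda-install-samples-9.0.sh ~/
cd ~/NVIDIA_CUDA-9.0_Samples
make -j4
# Binaries end up under bin/aarch64/linux/release/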

Thanks for the response.

Is there any document that details the manual installation of CUDA 9 and TensorRT 3.0 on the target board without the JetPack utility?

We have a requirement to make CUDA 9 and TensorRT 3.0 part of an EdgeOS image. We have the EdgeOS source built for the Jetson TX2 and are looking for a procedure to integrate the NVIDIA libraries into EdgeOS.

Since it’s typically installed by JetPack, there isn’t a document for it particular to Jetson, but it is performed the same way the desktop CUDA and TensorRT DEB packages are installed, so consult their documentation too. It should just be a few relatively simple dpkg -i commands followed by apt-get install.
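
As a sketch (the filenames here are hypothetical; use whatever versions JetPack downloaded for your release):

sudo dpkg -i cuda-repo-l4t-9-0-local_<version>_arm64.deb   # adds the local repo under /var
sudo apt-key add /var/cuda-repo-9-0-local/*.pub            # trust the repo’s signing key
sudo apt-get update
sudo apt-get install cuda-toolkit-9-0
# TensorRT follows the same pattern with its nv-tensorrt-repo .deb,
# then apt-get install tensorrt.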

FYI, adding the packages without JetPack is not supported…but you can still do it if you are willing to go through some manual steps.

Just to note, on a Jetson TX2 with R28.2 and several packages installed, you’ll find several local repositories. Normally “apt-get” looks to repositories over the internet…it is possible to have a local file-based repository, and this is what the “repo” packages add. Once a “repo” package is added you can freely install any package from it. This R28.2 TX2 has:

/var/cuda-repo-9-0-local/
/var/nv-tensorrt-repo-ga-cuda9.0-trt3.0.4-20180208/
/var/visionworks-repo/
/var/visionworks-sfm-repo/
/var/visionworks-tracking-repo/

Within these are all of the available “.deb” packages, along with the typical repo files about versioning and checksums.
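
You can poke at these directly, e.g.:

ls /var/cuda-repo-9-0-local/                          # the individual .deb packages plus the repo metadata
dpkg -c /var/cuda-repo-9-0-local/<some-package>.deb   # list the files a given package would install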

Within a JetPack installation (at least after it has been used once) there will be a file named “repository.json”. This contains JSON descriptions of packages and their URLs. You could use wget to download based on those URLs (but make sure it is for the right Jetson and release). An exact copy of the installed “/var/whatever” directories from an actual Jetson would be a better choice than a lot of wget operations.
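
If you do go the wget route, something like this rough sketch could pull the URLs out (the exact field layout of repository.json may differ between JetPack versions, so treat this as an approximation):

grep -o 'https://[^"]*\.deb' repository.json | sort -u > urls.txt
wget -i urls.txt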

Note that dpkg on the command line can’t resolve dependencies; using “apt-get” with local repositories, you can. Whether JetPack installs the packages or you use wget for a manual install, you will always start with the “repo” packages. Once the “repo” packages are in, you can use ordinary “apt-get” commands even with no internet.
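
For reference, a “repo” package works by dropping a file-based apt source, roughly like this (the exact list filename varies):

cat /etc/apt/sources.list.d/cuda-9-0-local.list
# deb file:///var/cuda-repo-9-0-local /
sudo apt-get update   # after this, ordinary apt-get install works with no internet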

Our requirement is to make the CUDA 9 and TensorRT 3.0 libraries part of the EdgeOS build image for the Jetson TX2. Users who flash this built EdgeOS image onto a Jetson TX2 board should be able to get working CUDA and TensorRT.

Currently NVIDIA provides CUDA and TensorRT either as distribution packages (deb or rpm) or as a run file to install manually. However, EdgeOS has limited capabilities; it lacks the tools to install applications/libraries. So is it possible to make these libraries/drivers part of the EdgeOS build system?

Can NVIDIA provide all the needed binaries/drivers and other configuration details, which could be placed into the EdgeOS file system during the build and made part of the build image?

The EdgeOS project is using the Yocto Project.

I don’t know EdgeOS, but what I’d advise is to copy those repos to your host and use tools to extract them into an alternate location, then see what that alternate location contains after you unpack. If you check the JetPack logs during installation of the repos, and compare them against the file system changes from manually extracting to alternate locations, then you know the order of the file copy. Basically it is a bit like old Slackware when Linux first came out.
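
A sketch of that extraction step, assuming dpkg-deb is available on the host (on a non-Debian host the same thing can be done with “ar x” plus tar):

mkdir -p /tmp/staging
for deb in /var/cuda-repo-9-0-local/*.deb; do
    dpkg-deb -x "$deb" /tmp/staging    # unpack payload only; no maintainer scripts run
done
find /tmp/staging -maxdepth 3 -type d  # see where the files would land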

Beware that you will also need the files from the driver package’s “apply_binaries.sh” script…at least those from “Linux_for_Tegra/nv_tegra/nvidia_drivers.tbz2”. The libglx.so will require the X server to use the correct ABI, and CUDA will require this as well (it doesn’t mean you have to run a desktop, but for many uses CUDA uses X as an interface to the video hardware).

I can’t say, but I strongly doubt you’ll see a distribution of the bare files as a run script in the near future, since there are so many dependencies. Such a distribution would be quite useful; e.g., it would allow me to run much of this on Fedora 27 when there are no other Fedora 27 releases (there is still an issue with gcc 7 being rejected by CUDA). Once you know how the “.deb” packages are placed, you can script this, probably sooner than waiting for a run file format.

I got the cuda9.0-L4T-xxx.deb package from the JetPack 3.2 download. Is this package sufficient to enable CUDA manually on the TX2? Does extracting this package manually into root enable CUDA?

Manual installation of the NVIDIA libraries is not easy. Is it possible to get Yocto project support from NVIDIA to fetch and install the needed libraries and versions?

Only the Ubuntu provided with L4T/JetPack is directly supported. One of the issues you will run into with any kind of alternate Linux distribution is the package format itself. Aside from the drivers installed during flash, the package you would normally start with is the CUDA repo package…this is what puts a local repository in “/var” which “apt-get” can then use to install from. However, those files are all in “.deb” format, so instead of installing the repo you probably need to use various Debian/Ubuntu tools to extract each of those “.deb” packages into an alternate archive directory, and then manually copy them as unmanaged files into your local file system (or convert them to some other package format).
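
Continuing the earlier extraction sketch, the unmanaged copy into your target root file system could be as simple as this (the rootfs path is a placeholder for wherever your EdgeOS/Yocto image is staged):

sudo cp -a /tmp/staging/. /path/to/edgeos/rootfs/
sudo ldconfig -r /path/to/edgeos/rootfs   # rebuild the linker cache inside the target tree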

The actual binary files in the host-side driver package’s “Linux_for_Tegra/nv_tegra/nvidia_drivers.tbz2” file are the drivers…all of those other packages basically interface with these drivers. Nothing else can succeed until those drivers are in place (you can simply extract this to some temp directory and explore what is in it…normally it would be unpacked at “/” of the Jetson, or at “rootfs/” of the sample rootfs when used with “apply_binaries.sh”).
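
For exploring, something like this (the internal layout varies by L4T release):

mkdir -p /tmp/nv_drivers
tar -xjf Linux_for_Tegra/nv_tegra/nvidia_drivers.tbz2 -C /tmp/nv_drivers
ls /tmp/nv_drivers/usr/lib/aarch64-linux-gnu/tegra/   # the Tegra driver libraries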

Note that the Xorg server has an ABI which the video driver is compiled against, and often CUDA needs this. You will save yourself some effort and grief if you use an X server with the same video/GPU ABI.