Install debian package on TX1 target file system

Hi,

Nvidia provides a vanilla target file system for the TX1. Is it possible to install a .deb package (for example, CUDA) into this file system?
The purpose is to create a complete file system on the host and flash it to the TX1 without needing to run any post-installation script (for example, the CUDA Debian package’s scripts) on the target.

Thanks.

My guess would be that the biggest hurdle would be the drivers. I’d wonder how “generic” they would be outside of the specific OS they were developed in and for, that being their flavor of proprietary Ubuntu. But good luck.

So what are my options for creating a target file system with all the required Debian packages (for example libcudnn, opencv, libnvinfer)?
Another option is to copy the Debian packages to the target (TX1) and install them after boot.

I’m not an expert on Debian or Ubuntu, or the TX1 for that matter. But the NVIDIA folks are wizards when it comes to getting specific packages to work with specific drivers, all requiring specific libraries. My point is that when you move outside that comfort zone you are probably leaving what they consider their “area of support”: they most likely cannot say one way or the other whether what you want to do will work, and depending on case load and the verification needed to respond, they might not consider it part of their job description. But good luck.

The “.deb” files work normally on a Jetson. If you have the “.deb”, then it works like any other Ubuntu. What differs is the repository to find CUDA (and related) in, which in turn is a difference in how it is distributed.

A second difference arises if a package adds or removes content in a non-tracked system location which dpkg would be unaware of. Not everything in “/usr/local/” will name a “.deb” as owning the content, even if a “.deb” file put the content there; it depends on the particular “.deb” which put it there.

Consider that CUDA9 has a repository, but it isn’t on the internet. If a Jetson has CUDA9, then package “cuda-repo-l4t-9-0-local” installed a set of files in “/var/cuda-repo-l4t-9-0-local/”, and then added this location to apt’s idea of where a repository is to search (examine the “/var/…repo…/” files…you’ll be quite interested in that). “cuda-repo-l4t-9-0-local” is not CUDA, but it is the repository containing everything CUDA9. Once this is in place (including “sudo apt update”) an “apt-get” command will find everything here the same as it will from a remote repository. There are a number of other “local” repositories provided, e.g., VisionWorks and TensorRT.
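As a concrete sketch of that local-repo mechanism on a Jetson which already has the CUDA 9 repo package installed (paths are from the description above; exact file names vary by release):

```shell
# On a Jetson where "cuda-repo-l4t-9-0-local" is installed, the repo
# contents live here: a directory of .deb files plus apt metadata.
ls /var/cuda-repo-l4t-9-0-local/

# The repo package also registered that directory as an apt source.
# The exact file name under sources.list.d varies, so just search:
grep -r '^deb ' /etc/apt/sources.list.d/
# a local repo entry looks roughly like:
#   deb file:///var/cuda-repo-l4t-9-0-local/ /
```

Note the “file:///” URI: apt treats this directory exactly like a remote repository, it just reads it from disk.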

If you install the repo package directories to a Jetson, then you could recursively copy those to another computer (perhaps one acting as a server) and set them up as a remote apt repository instead of a local one (I’ve not done this; I don’t normally use Ubuntu on PCs…I’m more of a Fedora guy). Beware that you shouldn’t make those available to other people if it is against the EULA.

If you can set JetPack to not delete intermediate files during a package install (not a flash), then you may find that some of the files used to copy to the Jetson are preserved and useful (like the actual repo-creation “.deb” which adds all the files within the repo). Looking closer, note that “repository.json” has perhaps been updated, and that JetPack uses this to find the packages it adds (meaning you might see if “wget” can download that URL manually)…and one of them is a “cuda-repo-_arm64.deb” file, which is in fact the file installed as a normal “.deb” to add the “/var/cuda…” content. JetPack is nothing more than a scripted/specialized web browser.
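If you did copy a local repo directory to an HTTP server of your own, the Jetson-side apt entry would only change from a “file://” URI to an “http://” one. The host name and list file name below are hypothetical placeholders:

```
# /etc/apt/sources.list.d/cuda-9-0-local.list (hypothetical names)
# local form, as installed by the repo package:
deb file:///var/cuda-repo-l4t-9-0-local/ /
# remote form, pointing at your own server instead:
deb http://my-lan-server/cuda-repo-l4t-9-0-local/ /
```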

One of the advantages of JetPack is that it understands dependencies. As soon as you work on “.deb” files manually the install order starts becoming a nuisance. However, if you managed to get all “*-repo” packages in place, then apt might be able to handle this…not sure. Regardless, you’d always start with the “repo” “.deb” file being installed with dpkg.
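A minimal sketch of that manual order (the versioned file name and the final package name are placeholders; substitute the actual names from your release):

```shell
# The actual versioned file name differs per release; placeholder here.
REPO_DEB=cuda-repo-l4t-9-0-local_VERSION_arm64.deb

# 1. install the repo package itself with dpkg; this only unpacks the
#    repo files under /var/ and registers the apt source
sudo dpkg -i "$REPO_DEB"

# 2. let apt re-read its sources, now including the local repo
sudo apt update

# 3. from here on, apt resolves dependencies out of the local repo
#    the same way it would from a remote one
sudo apt-get install cuda-toolkit-9-0
```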

Here comes a really big problem: I do not know of any cross-development dpkg or apt tool. When you use dpkg on a PC which has the aarch64/arm64 packages, it’ll think you want to install them to the local PC, and it’ll want to use the PC’s install locations. Unless you’ve done something specific to make it possible, the local PC’s dpkg won’t allow you to work with purely arm64 packages not marked as cross-tools. Worse, it won’t let you chroot into the “rootfs/” sample file system directory. I really do wish there were a cross-arch dpkg and apt which would allow me to work on a repo not of the host operating system and not of the host’s architecture…then all you’d need are the packages, and you’d be able to cross-dpkg install straight into “rootfs/” using the schema from the rootfs location rather than the host PC’s schema.
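One partial workaround worth knowing about (my suggestion, and emphatically not a full dpkg install: it runs no maintainer scripts and records nothing in the rootfs’s dpkg database, so the result is invisible to apt/dpkg later) is that dpkg-deb can unpack a foreign-architecture “.deb” into an arbitrary directory:

```shell
# dpkg-deb only unpacks the archive: it does not care about architecture,
# does not run preinst/postinst scripts, and does not touch the dpkg
# status database. Paths below are illustrative.
dpkg-deb -x some-arm64-package.deb /path/to/Linux_for_Tegra/rootfs/
```

This is fine for simple payload-only packages, but anything whose postinst script does real work (ldconfig, alternatives, users/groups) will be incomplete.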

You could easily add/copy the repos to “rootfs/var/” once you have them, and update the apt configuration files in “rootfs/etc/apt/” so that they would be immediately available upon boot after a “sudo apt update”; but actually putting the final installed files in place prior to first boot would be problematic.
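A sketch of that staging step, assuming the stock L4T layout where the sample rootfs lives under “Linux_for_Tegra/rootfs/” (the source path and list file name below are illustrative):

```shell
# stage an already-copied repo directory into the sample rootfs
ROOTFS=Linux_for_Tegra/rootfs            # adjust to your driver package path
sudo cp -a /some/copy/of/cuda-repo-l4t-9-0-local "$ROOTFS/var/"

# register it so apt sees it on first boot (after one "sudo apt update")
echo 'deb file:///var/cuda-repo-l4t-9-0-local/ /' | \
    sudo tee "$ROOTFS/etc/apt/sources.list.d/cuda-9-0-local.list"
```

After flashing, the packages still have to be installed on the Jetson itself, but at least no network or USB copying is needed.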

This is why I clone (clone is your friend! :P). After I’ve added everything to the Jetson and updated the whole system (not just CUDA), the clone is truly complete, and I’ve tested it for boot. If I want a copy of the “/var/…repo…” files I can just pull them off of a loopback-mounted clone; I don’t need the Jetson running to get them. I can restore this image if anything goes wrong. If I update the Jetson, then I can loopback mount the clone and use rsync to copy updates into the clone. I can also use a loopback-mounted clone as my sysroot for cross development.
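For reference, the loopback workflow looks roughly like this (the clone file name, mount point, host name, and destination paths are all placeholders for whatever your clone and network look like):

```shell
# mount the raw clone image read-write via a loop device
sudo mount -o loop my-jetson-clone.img.raw /mnt

# pull the repo files off the clone without a running Jetson
cp -a /mnt/var/cuda-repo-l4t-9-0-local /some/backup/location/

# or push updates from a live Jetson into the clone over ssh
sudo rsync -a --numeric-ids jetson:/usr/local/ /mnt/usr/local/

sudo umount /mnt
```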

Note that individual files intended to be part of the final root file system must be stored on a file system type which is native to Linux and thus understands things like device special files and suid bits. An SD card could work, but you couldn’t use VFAT, NTFS, or FUSE. A clone image is just a binary file, and you can place it on a VFAT or NTFS partition; the operating system can then loopback mount it, and the file system type within the clone (ext4) will appear. So a clone file can easily be passed around, and it won’t matter what operating system or file system type the clone image itself is stored on.