TensorRT 4.0 Install within Docker Container

Hey All,

I have been building a Docker container on my Jetson Nano and using it as a workaround to run Ubuntu 16.04. This way I can implement a project that depends heavily on packages that are only compatible with 16.04. I have been launching the container using a community-built version of the wrapper script that allows the container to use the GPU, like nvidia-docker but for arm64 architecture.

This project depends on basically all of the packages included in JetPack 3.2, and that includes things like CUDA 9.0, cuDNN 7.1, and TensorRT 4.0. With the help of these forums I was able to successfully install CUDA 9.0 and cuDNN inside the container using an arm64 .deb file. However, I was NOT able to find any similar .deb file for TensorRT 4.0.

That was a huge bummer, because that is the only thing standing between me and a fully functioning autonomous drone. It is very frustrating that NVIDIA does not provide a repository for the arm64 versions of CUDA, cuDNN, or TensorRT. I believe they only provide it for x86_64 and aarch64, which is pretty much useless for anyone trying to work directly from a Jetson device. More arm64 libraries would be much appreciated.

Does anyone know of a workaround to install TensorRT 4.0 inside my Docker container? I would be very appreciative of any help anyone can offer.


  • akrolic

[s]There are .deb files in the JetPack download folder when you run the SDK Manager installer. You should be able to use those to install those dependencies in a container, but I agree, and I have mentioned my desire for an apt repository previously. It would make things a lot easier; for now you have to hack a bunch of dependencies into an image in one ugly manner or another.

Never fear, however, as nvidia-docker support is coming soon (and with it, presumably, images with all this stuff baked in). You might consider designing your image on x86 nvidia-docker for now, since many of the same images should work with few modifications, if any, other than a build for a different architecture.[/s]

Edit: all the above has been done. Please search the forum for nvidia-docker on Tegra.

Hey mdegans,

I was alerted by your post to the nvidia-docker upgrade coming in JetPack 4.2.1. I imagine they will only link the CUDA, cuDNN, and TensorRT libraries from the host into the container. In that case I would still have to come up with a solution to install specific versions using the .deb packages. I will take a closer look at the SDK Manager and see if I can't get hold of TensorRT for arm64.
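If the SDK Manager download folder does yield the .deb files, one way to get them into a container is to generate a Dockerfile that installs them from the build context. This is only a sketch under assumptions: the filenames and wildcard patterns below are placeholders, not the actual JetPack 3.2 package names, and the packages would need to be copied next to the Dockerfile first.

```shell
# Hypothetical sketch: write out a Dockerfile that installs local JetPack
# .deb files (placeholder names) into an arm64 Ubuntu 16.04 base image.
cat > Dockerfile.tensorrt <<'EOF'
FROM arm64v8/ubuntu:16.04
# .deb files copied from the SDK Manager download folder on the host;
# the wildcard below is an example pattern, not the real package name.
COPY nv-tensorrt-*.deb /tmp/
RUN dpkg -i /tmp/nv-tensorrt-*.deb \
    && apt-get update \
    && apt-get install -y -f \
    && rm -f /tmp/nv-tensorrt-*.deb
EOF
```

The image would then be built on the Nano itself (or on x86 with emulation) with something like `sudo docker build -f Dockerfile.tensorrt -t tensorrt-16.04 .`.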


I really doubt they would bind mount those dependencies like some are doing (edit: they did). They don’t do it on x86, and there would be no reason to on arm64. Among other good reasons not to use bind mounting, it can expose the host to a directory traversal attack.


[s]Hey. If you want TensorRT installed on the image, a simple way might be to flash the JetBot image if you have a 64 GB card or larger. Otherwise I think it’s either already installed, in the .deb files, or pinned on this website.

CUDA and cuDNN are both in .deb packages, I am 99% certain, as I installed them that way just recently.[/s]
Edit: currently you can use the online apt repos to install everything. You can “apt search nvidia” for most of what’s available.
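For anyone landing here later, here is a quick sketch of that search step. It assumes nothing beyond apt itself: on a Jetson with the JetPack repos configured it lists the CUDA/cuDNN/TensorRT packages, while on any other machine it just reports whatever NVIDIA packages the local apt cache knows about, possibly none.

```shell
# List NVIDIA packages known to apt. On a Jetson with the online JetPack
# apt repos set up, this is where the CUDA/cuDNN/TensorRT packages appear.
nvidia_pkgs=$(apt-cache search nvidia 2>/dev/null || true)
echo "${nvidia_pkgs:-no nvidia packages found in the local apt cache}"
```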

The first comment has me curious: is it possible to flash the Jetson Nano with a different OS? I’ve seen a thread on this before, but how did you accomplish it, @akrolic?

It’d be nice to have CentOS on the Nano and run Docker containers like you mention.

The Linux for Tegra documentation (see “Setting Up Your File System”) details how to replace your rootfs with something else (like another distro); however, unless it’s Ubuntu based (like Ubuntu Base), the OTA updates probably won’t work.

I don’t know whether the .deb packages are written flexibly enough to work with other Debian-based distros, but you could test. Swapping out a rootfs isn’t a small thing, however, and you’ll likely need to know a lot about Linux in general, and about the distro you’re planning to use, to get things working. It’s certainly doable with enough work, however.

Re: CentOS in Docker on the Nano: it runs out of the box.

$ sudo docker run -it --rm centos:latest
Unable to find image 'centos:latest' locally
latest: Pulling from library/centos
d6d1431672e7: Pull complete 
Digest: sha256:fe8d824220415eed5477b63addf40fb06c3b049404242b31982106ac204f6700
Status: Downloaded newer image for centos:latest
[root@f735d27be7df /]# yum install nano    # the text editor
Failed to set locale, defaulting to C.UTF-8
CentOS-8 - AppStream                             11 MB/s | 4.8 MB     00:00    
CentOS-8 - Base                                 890 kB/s | 3.2 MB     00:03    
CentOS-8 - Extras                                15 kB/s | 2.1 kB     00:00    
Dependencies resolved.
 Package        Architecture      Version                Repository        Size
 nano           aarch64           2.9.8-1.el8            BaseOS           579 k

Transaction Summary
Install  1 Package

Total download size: 579 k
Installed size: 2.2 M
Is this ok [y/N]: y
Downloading Packages:
nano-2.9.8-1.el8.aarch64.rpm                    290 kB/s | 579 kB     00:01    
Total                                           271 kB/s | 579 kB     00:02     
warning: /var/cache/dnf/BaseOS-01ed9fc6ac393b86/packages/nano-2.9.8-1.el8.aarch64.rpm: Header V3 RSA/SHA256 Signature, key ID 8483c65d: NOKEY
CentOS-8 - Base                                 1.6 MB/s | 1.6 kB     00:00    
Importing GPG key 0x8483C65D:
 Userid     : "CentOS (CentOS Official Signing Key) <security@centos.org>"
 Fingerprint: 99DB 70FA E1D7 CE22 7FB6 4882 05B5 55B3 8483 C65D
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
Is this ok [y/N]: y
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1 
  Installing       : nano-2.9.8-1.el8.aarch64                               1/1 
  Running scriptlet: nano-2.9.8-1.el8.aarch64                               1/1 
  Verifying        : nano-2.9.8-1.el8.aarch64                               1/1 


[root@f735d27be7df /]#

Excellent! Thank you for such a quick response. It’s always a toss-up posting in forums these days.

It’s nice to know that it is possible to use another OS. Using the CentOS Docker could be equally useful.

I haven’t tried using it with nvidia-docker specifically, so I don’t know whether the GPU/CUDA stuff would work (it might, it might not), but you can certainly run CentOS stuff on the CPU, like some database. You would have to check the nvidia-docker documentation for Tegra and run tests as far as CUDA stuff is concerned.