How to upgrade CUDA version for Jetson Xavier NX

Hi guys!

I have received the following Jetson Xavier NX:

NAME="Ubuntu"
VERSION="18.04.6 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.6 LTS"
VERSION_ID="18.04"

R32 (release), REVISION: 5.2, GCID: 27767740, BOARD: t186ref, EABI: aarch64, DATE: Fri Jul 9 16:05:07 UTC 2021

Package: nvidia-jetpack
Version: 4.5.1-b17
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-cuda (= 4.5.1-b17), nvidia-opencv (= 4.5.1-b17), nvidia-cudnn8 (= 4.5.1-b17), nvidia-tensorrt (= 4.5.1-b17), nvidia-visionworks (= 4.5.1-b17), nvidia-container (= 4.5.1-b17), nvidia-vpi (= 4.5.1-b17), nvidia-l4t-jetson-multimedia-api (>> 32.5-0), nvidia-l4t-jetson-multimedia-api (<< 32.6-0)
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_4.5.1-b17_arm64.deb
Size: 29372
SHA256: 378f7588e15c35692eb1bed6f336be74f4f396d88fad45af67c68e22b63be04b
SHA1: e41f26a3d8326e9952915eee12fa37e17de3245f
MD5sum: 31b2bd9d0f214f74acaeb3d8e4279e9d
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

Package: nvidia-jetpack
Version: 4.5-b129
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-cuda (= 4.5-b129), nvidia-opencv (= 4.5-b129), nvidia-cudnn8 (= 4.5-b129), nvidia-tensorrt (= 4.5-b129), nvidia-visionworks (= 4.5-b129), nvidia-container (= 4.5-b129), nvidia-vpi (= 4.5-b129), nvidia-l4t-jetson-multimedia-api (>> 32.5-0), nvidia-l4t-jetson-multimedia-api (<< 32.6-0)
Homepage: http://developer.nvidia.com/jetson
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_4.5-b129_arm64.deb
Size: 29360
SHA256: 002646e6d81d13526ade23d7c45180014f3cd9e9f5fb0f8896b77dff85d6b9fe
SHA1: cb17547b902b2793e0df86d561809ecdbf7e401f
MD5sum: 06962c42e462f643455d6194d1a2d641
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

JetPack version: 4.5.1
CUDA version: 10.2.89
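For reference, I gathered the info above with these commands on the Jetson (the CUDA version file path is, as far as I know, where JetPack 4.x puts it):

cat /etc/os-release              # Ubuntu release info
head -n 1 /etc/nv_tegra_release  # L4T release string
apt-cache show nvidia-jetpack    # JetPack meta-package details
cat /usr/local/cuda/version.txt  # CUDA version on JetPack 4.x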

We want to run a Python program that uses Ultralytics, OpenCV, and Torch in the source code. The problem we found is:

Ultralytics doesn’t work on Python 3.6.9,
but
if we install Python 3.8 or 3.9, the version of Torch we would have to install for Python 3.8 is not compatible with JetPack 4.5.1.

Jetpack 4.5.1 > Python 3.6.9 > OpenCV ✔️ > Torch ✔️ > Ultralytics ❌

Jetpack 4.5.1 > Python 3.9 > OpenCV ✔️ > Torch ❌ > Ultralytics ✔️

What are your possible solutions for this?

Can I install a newer version of CUDA that would allow me to install a newer version of Python?
According to this link:

The version of PyTorch depends on the JetPack version? Why?

Thanks everyone!

Hi @jaime_arroyo, I recommend upgrading/re-flashing your Xavier device with JetPack 5. Then you can use the PyTorch wheels for Python 3.8

It’s because PyTorch is built with the version of CUDA/cuDNN that is in JetPack, and if you change versions (other than minor bumps in the version numbers), you need to re-compile PyTorch against that new version of CUDA/cuDNN. This is why, when you normally install PyTorch on a PC, you need to pick the version of CUDA that you have installed.
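You can check this pairing directly; these attributes are standard in PyTorch:

python3 -c "import torch; print(torch.__version__)"          # installed PyTorch version
python3 -c "import torch; print(torch.version.cuda)"         # CUDA version PyTorch was built against
python3 -c "import torch; print(torch.cuda.is_available())"  # False if the runtime CUDA doesn't match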

“Hi @jaime_arroyo, I recommend upgrading/re-flashing your Xavier device with JetPack 5. Then you can use the PyTorch wheels for Python 3.8”

Hi @dusty_nv, is it possible to upgrade without flashing? Do you have a doc with the steps to follow?

“It’s because PyTorch is built with the version of CUDA/cuDNN that is in JetPack, and if you change versions (other than minor bumps in the version numbers), you need to re-compile PyTorch against that new version of CUDA/cuDNN. This is why, when you normally install PyTorch on a PC, you need to pick the version of CUDA that you have installed.”

@dusty_nv, what I understood from this is that I can upgrade the CUDA/cuDNN version without upgrading JetPack, and then install the right PyTorch build for the new version of CUDA/cuDNN. Do you have a doc with the correct instructions to do so?

Thanks, and please reply as soon as you can!!
Best

Upgrading from JetPack 4 to JetPack 5 is a major version change, so you should re-flash your SD card or eMMC.

Hi @dusty_nv.

I have a Lenovo ThinkEdge SE70 with a Jetson Xavier NX.
Do I have to follow the following link to flash?

If not, do you have one?

Best,

What gets flashed is “L4T” (“Linux for Tegra”), which is just Ubuntu plus NVIDIA drivers. JetPack is what flashes it (and is paired with SDK Manager, the network layer on top of that). If you pick an L4T release, you’ve also basically picked a JetPack release. You will want to pick the most recent compatible L4T, and then use the paired JetPack/SDKM release. The URL to the listing of L4T releases and the URL for the JetPack/SDKM releases end up at the same place. I suggest going to the L4T release page and picking the most recent compatible release:

That has documentation and the needed software. You’ll want your host PC to be running Ubuntu 20.04 (18.04 will also work), which is mostly a dependency of the GUI for the flash software. JetPack can install CUDA and some other optional components (after the flash the Jetson automatically reboots, then there is first-boot account setup, and JetPack completes the optional content over ssh using that fully booted account login). You might need to flash the Jetson itself (QSPI content) and an SD card separately on those models.

Thanks @linuxdev for your reply and the easy explanation of how to go forward.

Just to be sure (dude, I’m a newbie):

  1. Download the Root Filesystem from this URL: https://developer.nvidia.com/downloads/embedded/l4t/r35_release_v4.1/release/tegra_linux_sample-root-filesystem_r35.4.1_aarch64.tbz2/ or the Driver Package?
  2. Follow this guide to flash:
    Flashing Support — Jetson Linux Developer Guide documentation

About point 2: which part do you recommend I follow? Can I use an SD card to do it if I don’t have a Linux host? (My own PC is a Mac M1.)

Thanks

If you use JetPack/SDK Manager you don’t need to download anything. It downloads and installs it all.

If you want to know more about what actually goes on that you don’t see, the history is that JetPack did not always exist. Everything was flashed on command line from a Linux host PC. The Jetson itself becomes a custom USB device when in recovery mode, which implies it needs a custom USB driver (many people think flash turns everything into a bulk/mass storage device, but this is very far from correct). That flash software is appropriately known as the “driver package”.

The driver package needs to generate a partition to flash. The content which goes on this partition starts with the “sample root filesystem”, but preparing it requires first running the “apply_binaries.sh” script (with sudo; and the sample rootfs would need to be unpacked correctly with sudo on a native Linux filesystem type). The apply_binaries.sh script is the part which adds the NVIDIA content to what would otherwise be a purely Ubuntu rootfs. One of the reasons for this is that NVIDIA is not modifying the Ubuntu filesystem; instead, it is the end user doing this, which makes it easier so far as EULAs and licenses go.
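For reference, the manual prep is roughly this (a sketch using the file names from the R35.4.1 release linked earlier; names change per release):

# on the host PC, on a native Linux filesystem (ext4), not NTFS/VFAT:
tar xf Jetson_Linux_R35.4.1_aarch64.tbz2    # the driver package; produces Linux_for_Tegra/
cd Linux_for_Tegra/rootfs/
sudo tar xpf ../../tegra_linux_sample-root-filesystem_r35.4.1_aarch64.tbz2
cd ..
sudo ./apply_binaries.sh                    # adds the NVIDIA content to the Ubuntu rootfs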

Once the rootfs is correctly prepped you don’t need to run apply_binaries.sh again (assuming this is manually performed, but JetPack does this for you). At some point the GUI app JetPack was created to allow for a few things the command line does not do. The driver package does not have the ability to install optional packages, e.g., CUDA, and with command line you’d need an alternate method to install this using the version specific to Jetsons. JetPack also has the ability to use ssh on a fully booted Jetson to install that optional content once the system boots and you have a login account set up.

However, JetPack (by itself) does not have networking abilities. That’s why SDK Manager was added. This is the layer which helps to find and match software that JetPack can work with for a specific release, and then download it without the user having to find everything. So JetPack with SDKM implies you don’t download anything else, it does all of this correctly. You don’t need to download or deal with the sample rootfs with that method. If you want to flash on command line just ask, the steps differ, but if you are not manually flashing on command line, just use JetPack/SDKM. Incidentally, even if you flash on command line, JetPack/SDKM installs that too, so you could use JetPack/SDKM to download, and then still flash on command line.

The flash software itself will create a partition image at the time of flash (at least for eMMC models; SD card models are newer and have different procedures, since flash to the module is boot content only in QSPI memory, with the o/s on the SD card as a separate manual step).

If you’ve installed the flash software (via JetPack or manually), then you will have a “Linux_for_Tegra/” subdirectory (for JetPack this will be at “~/nvidia/nvidia_sdk/JetPack...version.../Linux_for_Tegra/”). With the exception of the “rootfs/” subdirectory this is the “driver package”. The “rootfs/” is populated with a purely Ubuntu filesystem, then modified to contain the NVIDIA content. This is almost an exact match to what gets flashed.

During manual flash (or indirectly during JetPack flash) the command line arguments will cause the kernel and device tree and perhaps some firmware to be added in to the “/boot” content (or related content) during flash. In the “Linux_for_Tegra/” directory you will notice several “*.conf” files. The ones with “jetson” in their name are symbolic links which give a more common name for the hardware, pointing at other files which are named after the exact model names (in a technical format, not the human format). That technical format has two components: one for the module model, the other for the carrier board model. To see this, check out “ls -l *.conf” from “Linux_for_Tegra/”. Command line flash targets are any of those .conf names with the .conf removed, as the sketch below shows.
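For example (the exact .conf names vary by release and board):

cd Linux_for_Tegra
ls -l *.conf    # e.g., jetson-xavier-nx-devkit.conf -> a model-specific .conf
# the flash target is the .conf name minus the extension:
# jetson-xavier-nx-devkit.conf -> flash target "jetson-xavier-nx-devkit"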

Jetsons don’t have a BIOS. What they do have is the equivalent in software. For eMMC models, that BIOS equivalent and the boot content is in signed partitions. For SD card models, that content is in QSPI memory on the module itself. The rootfs of eMMC models goes into an eMMC partition, and the rootfs of the SD card models goes on the SD card. The SD card model flash is typically aimed at QSPI flash. Notice in the earlier “ls -l *.conf” that there are some targets with “qspi” in the name? An example is this target/conf file:
jetson-xavier-nx-devkit-qspi.conf

When you flash a Jetson you are not just flashing the o/s, you are also flashing the equivalent of BIOS and boot content. This is a big reason why one might need to flash the Jetson module itself for SD card models…so the BIOS setup is correct before handing off to the o/s.

Many cases on SD card models involve a flash command which might name an SD card, but not really flash the SD card. The boot content needs a pointer in QSPI to the boot device, and so the flash of the QSPI might name an SD card, and put a pointer in the QSPI, but not actually create an SD card. There is other documentation if you wish to create SD cards from that same release (many people just use the SD card image which is also available from that release page).
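As an illustration, the Quick Start Guide’s command for an SD card Xavier NX devkit looks like this (a sketch; the correct target for the ThinkEdge SE70 may differ, so check Lenovo’s docs):

# Jetson in recovery mode and connected to the host PC over USB:
sudo ./flash.sh jetson-xavier-nx-devkit mmcblk0p1
# "jetson-xavier-nx-devkit" is the board target; "mmcblk0p1" is the root
# device the boot content in QSPI will point at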

That documentation URL you found is correct, but that is for manual flashing. If you download the SDK Manager from the correct release version, and install that on the host PC, you then run “sdkmanager” on the command line, without sudo, and SDKM then performs all of the downloads, setup, and flash without you needing to know about the flash.sh commands.
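Roughly, assuming an x86_64 Ubuntu host (the .deb file name varies by SDKM version):

sudo apt install ./sdkmanager_*.deb   # the .deb downloaded from NVIDIA's SDK Manager page
sdkmanager                            # run as a normal user, not with sudo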

Do note that earlier on in history there was a default login name and password. At some point California law stopped allowing this default setup due to so many things being hacked based on never resetting a default. At that point NVIDIA changed to a first boot account setup whereby you are asked for a login name and pass to create during the boot, e.g., while directly connected to the Jetson or via serial console (which is sort of directly connected). This has to be completed before JetPack can finish installing the optional software, e.g., CUDA.

As an alternative, from your “Linux_for_Tegra/” directory, you can run this to pre-create the login:
sudo ./tools/l4t_create_default_user.sh
(older releases did not have this in the “tools/” directory)

That script will ask you for a login name and password to set up in the rootfs/ of the host PC. From that point on every flash will have that account name and pass already set up. This means it is not a default shipped password if it is the end user who sets this up. A company mass producing and using that and then shipping to California might have some problems.
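For a non-interactive run, recent releases accept options along these lines (the values here are hypothetical, and the flags should be confirmed against the script’s own help text for your release):

sudo ./tools/l4t_create_default_user.sh -u myuser -p mypass -n myhostname --accept-license
# -u login name, -p password, -n hostname; --accept-license pre-accepts the EULA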

So just use sdkmanager. Don’t bother with the driver package or sample rootfs download.

This is a perfect explanation. I tried with sdkmanager. The problem is there is no SDKM version for arm64 (the Lenovo ThinkEdge comes with Ubuntu 18.04 on the arm64 architecture).

So, I think the only way is to do it manually, no?

Best,
