Upgrade procedure


I know that JetPack 4.6.0 will be released soon. Is there an upgrade procedure to allow us to upgrade from 4.5.1 to this new release?


Hello @malcolm.mcdowell,

I hope you are doing well.

The OTA update feature, supported since JetPack 4.5, enables users of NVIDIA Jetson platforms to update their devices in place. Instead of going through the process of re-flashing a board with an image of the latest JetPack version, the user can now simply perform the update, as long as the device supports it. This provides benefits such as the option to keep files across the update, to customize the file system before updating, and a fail-safe update procedure.

For more information on this subject, you can check the NVIDIA Jetson Linux Developer Guide for JP4.5 at the following link:
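As a rough illustration of the package-based OTA flow on the device itself: the exact repository entries must be taken from the Developer Guide, and the apt source file name below is the one used on JP4.5 systems, so treat it as an assumption to verify.

```shell
# Sketch of the Debian-package OTA flow, run on the Jetson itself.
# 1. Point the NVIDIA apt source at the new release (file name as on JP4.5):
sudoedit /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
# 2. Pull and apply the updated packages:
sudo apt update
sudo apt dist-upgrade
```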

best regards,
Andres Campos
Embedded Software Engineer

Hi Andres,

Thanks. I did review this link earlier today. I am getting familiar with the NVIDIA software (structure and organization).

Some background: we will be doing our own development for a custom board. We have taken JetPack 4.5.1 with kernel 4.9 and added custom drivers and device tree changes to match our hardware. With JetPack 4.6 being released soon, and with more releases to come in the future, I want to pick up the changes in each new release with the least effort.

What is the best procedure to include these updates? Do we have to rebase our existing code onto the new JetPack release, or how should we structure our code so it can easily be moved to the newest release?



Thanks for sharing that information.

You are working with a custom device tree, as well as a custom kernel and/or kernel modules (depending on how you are including your drivers). Therefore, when upgrading to a new JetPack version, there is no way around customizing those new files as well.

With our customers, we evaluate the advantages of moving to a newer JetPack version in order to determine whether porting the drivers and other changes is worth it. In some cases there are no major changes that could affect the functionality of the drivers or other components; however, it is something to consider when evaluating the risk.

To prepare your software for a JetPack upgrade, I would recommend using a version control system such as Git. This allows you to cleanly separate the changes you made to the JetPack sources from the base code provided by NVIDIA. Then, when the moment comes, you can simply generate a patch set consisting of your custom changes and work from there to port them to the new sources, always maintaining a clean version history. If there are no major changes from one JetPack release to the next that affect your customization, the porting becomes a very simple process.
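To make that concrete, here is a minimal sketch of the workflow described above, using a throwaway repository in place of the real L4T source tree (all paths, file names, and identities here are illustrative, not NVIDIA's):

```shell
# Illustrative sketch only: a scratch repository standing in for the JetPack/L4T
# source tree; in practice these commands run inside the NVIDIA sources.
set -e
work=$(mktemp -d)
git init -q "$work/sources"
cd "$work/sources"
git config user.email "dev@example.com"   # placeholder identity
git config user.name "Dev"

# Baseline: import the pristine NVIDIA sources and tag them.
echo "nvidia baseline" > driver.c
git add driver.c
git commit -qm "Import JetPack 4.5.1 sources"
git tag jetpack-4.5.1

# Custom work lives on its own branch, one logical change per commit.
git checkout -qb custom-board
echo "custom device-tree change" >> driver.c
git commit -qam "Add custom board device tree changes"

# When a new JetPack arrives, export the delta as patches, then replay it on
# the new baseline (e.g. with "git am" or "git rebase --onto").
git format-patch jetpack-4.5.1..custom-board -o "$work/patches"
```

On the next release you would import the new NVIDIA sources as a new tagged baseline and apply the exported patches with `git am`, resolving whatever conflicts the new code introduces.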

The OTA update, when performing a major version jump such as from JP4.5 to 4.6, would only give you advantages such as the ones I mentioned in my original response. However, it would not eliminate the extra step of customizing your system.

best regards,
Andres Campos
Embedded Software Engineer

Thanks Andres. Is the planned release for Jetpack 4.6.0 still this monthe?


Hello Malcolm,

Unfortunately I am unable to help you with that question, since I am not an NVIDIA employee. I would advise you to wait for an NVIDIA representative to help us with that information.

best regards,
Andres Campos
Embedded Software Engineer

Hi @malcolm.mcdowell, yes it is still planned for later this month or early next month.

Hi Malcolm,

You just touched on a really interesting topic, and since you are just getting up to speed with the software stack, this could be of interest to you.

Depending on your internal development scheme and your application requirements, you can choose between two different build/development environments:

a) JetPack: You can develop using JetPack and work on your application and features natively on the system, as you are probably already doing right now.

b) Yocto: There is a community project named meta-tegra (GitHub - OE4T/meta-tegra: BSP layer for NVIDIA Jetson platforms, based on L4T) that allows you to develop on the Jetson platform using Yocto. The main advantages of working with Yocto are that you can cross-compile everything before deployment, make your build easy to reproduce on any build server, and keep detailed track of which features and packages are installed in your system. The only downside is that you won’t have the Ubuntu UI that JetPack provides, so if the user interface and desktop support are critical for your project, JetPack should be the way to go.

Here is a quick guide on how to use meta-tegra for your convenience in case you are interested:
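In case it helps, here is a very rough sketch of what a meta-tegra setup looks like. The branch name (`dunfell-l4t-r32.5.0`) and machine name are assumptions that should be checked against the meta-tegra documentation for the release matching your JetPack version:

```shell
# Rough sketch; branch and machine names are assumptions to verify against
# the meta-tegra wiki for your L4T release.
git clone -b dunfell git://git.yoctoproject.org/poky
git clone -b dunfell-l4t-r32.5.0 https://github.com/OE4T/meta-tegra.git
source poky/oe-init-build-env build
bitbake-layers add-layer ../meta-tegra
echo 'MACHINE = "jetson-xavier-nx-devkit"' >> conf/local.conf
bitbake core-image-minimal
```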

Now that we have covered software control and reproducibility for development, the next topic is system updates. Here you have different options depending on what you are looking for:

a) For jumping from one JetPack version to another, there is the OTA option, which will allow you to update your system to the next release. This is what Andres already described.

b) For more routine system updates, such as a kernel update or an application update, you can use Mender (supported in Yocto) or even create your own update mechanism. Maybe you should wait for JP 4.6 and see if there is any new feature that can help you with this as well :)

Hope this helps in your new journey using the NVIDIA platforms.

Best Regards,

Thank you everyone for your great feedback!



I have used Yocto to get a build complete.

I would like to understand how to create a JetPack release (option a above) from source as a comparison. Can JetPack 4.5.1 be built from source? I have downloaded the source for the kernel/u-boot from the following link:
L4T | NVIDIA Developer. I was able to get them compiled.

For the root file system, what should be used? I see a Sample Root Filesystem Source (https://developer.nvidia.com/embedded/l4t/r32_release_v5.1/sources/ubuntu_bionic-l4t_lxde_aarch64_src.tbz2)

What other software components make up JetPack 4.5.1 and where can the source be located?

If you have a link that describes this build process for the Xavier NX board, that would be great.


I can only answer a small part of this, but the sample root filesystem is a pure Ubuntu 18.04 filesystem for arm64. The script in the flash software, “apply_binaries.sh”, is then run (with sudo) to add NVIDIA-specific content, mainly drivers; e.g., plain vanilla Ubuntu does not have a GPU driver. This, plus some boot content, is how the default rootfs image (APP) is created and flashed.
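For reference, the usual sequence looks roughly like the following. The tarball names and release numbers here are for r32.5.1 on Xavier NX and are assumptions to check against what you actually downloaded from the L4T page:

```shell
# Rough sketch for r32.5.1 on Xavier NX; file names are assumptions.
tar xf Jetson_Linux_R32.5.1_aarch64.tbz2            # unpacks Linux_for_Tegra/
cd Linux_for_Tegra
sudo tar xpf ../Tegra_Linux_Sample-Root-Filesystem_R32.5.1_aarch64.tbz2 -C rootfs
sudo ./apply_binaries.sh                            # overlay NVIDIA drivers onto rootfs/
sudo ./flash.sh jetson-xavier-nx-devkit mmcblk0p1   # flash the APP partition
```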

Yocto would basically replace the sample rootfs, and there is likely a lot of content there which is directly compatible with Yocto. Be aware, though, that content is mainly added through QEMU using the Ubuntu “dpkg” (or “apt”) mechanism, both to add that content and to manage the rootfs.

When I mentioned boot content this is to say that some content is added to the “/boot” of the APP image based on arguments passed to the flash software, e.g., arguments can change which extlinux.conf content is present, or which device tree is used.

One of your biggest challenges will be keeping an Xorg server with the same ABI as the one used in this particular Ubuntu 18.04 install. The reason is that the GPU driver is loaded by the Xorg server, and the driver itself is a binary; you cannot recompile the GPU driver for a different ABI. There are actually several drivers loaded this way by Xorg, but to get an idea of which ABI is used, run the command “grep 'ABI' /var/log/Xorg.0.log” and look for the “Video Driver” entry.


I am reviewing the NVIDIA_Jetson_Linux_Driver_Package documents. My goal is to modify the open-source files and create new Debian packages. I am working on the Xavier NX.

This document says

“Three of the four kernel packages are in public_sources.tbz2. This archive contains another archive named kernel_src.tbz2, which in turn contains three directories of header files:”

I have extracted kernel_src.tbz2, but I don’t see the above-mentioned nvidia-l4t-kernel-xxx files.

Are these files related to the Nano or Jetson TX only?


I am not familiar with the document that mentions these. I can tell you, though, that I would normally unpack the full kernel source natively onto a Jetson, configure it with “/proc/config.gz” plus an edit of CONFIG_LOCALVERSION, and then use this for building whether I just want modules or the full kernel. I do not use a separate header directory in the case of Jetsons.
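As a sketch of that native workflow: the “-tegra” suffix below is the usual L4T LOCALVERSION, but confirm it against `uname -r` on your unit, and the source path is just an example.

```shell
# Run on the Jetson itself; assumes the kernel source was unpacked to ~/kernel-4.9.
cd ~/kernel-4.9
zcat /proc/config.gz > .config          # start from the running kernel's configuration
# Match the running kernel's version suffix (see "uname -r", e.g. 4.9.201-tegra):
./scripts/config --set-str LOCALVERSION "-tegra"
make olddefconfig
make -j"$(nproc)" Image modules         # or just "make modules" for module-only work
```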


I have been updating the pinmux configuration using the Jetson_Xavier_NX_Pinmux_Configuration_Template_v1.06 spreadsheet. My first attempt is to simply generate the default pinmux from this spreadsheet and update my kernel.

When I copy the files over and run pinmux-dts2cfg.py, I get a few errors like this one below:
ERROR: pin dap3_sclk_pt1(0x00000440) field nvidia,enable-input(0x00000040) is not matching, val = 0x01 expected = 0x00.

I was assuming running the default values would not produce any errors. Can you tell me why I am getting the error above and how to fix it?

Also, once the .cfg files are generated, where do they get copied to, and what is the naming convention? If you can point me to a link that describes this, that would be great. I didn’t see one in my initial review of the documentation.


That’s one I don’t know the answer to. I’m guessing someone at NVIDIA would know this (somewhat obscure) error though.

Hello malcolm.mcdowell,

You should remove the --mandatory_pinmux_file option to confirm.
Please also refer to Pinmux-dts2cfg.py errors with default pinmux - #5 by JerryChang.
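For reference, the invocation without that option would look roughly like this, run from the pinmux directory of the L4T kernel sources. The .dtsi and .cfg names below are the Xavier NX defaults generated by the spreadsheet and are assumptions to adjust for your files:

```shell
# Sketch only; file names are the spreadsheet defaults and may differ for you.
python pinmux-dts2cfg.py --pinmux \
    addr_info.txt gpio_addr_info.txt por_val.txt \
    tegra19x-jetson_xavier_nx-pinmux.dtsi \
    tegra19x-jetson_xavier_nx-gpio-default.dtsi \
    1.0 > tegra19x-mb1-pinmux-p3668-a01.cfg
# The generated .cfg typically goes under Linux_for_Tegra/bootloader/t186ref/BCT/,
# replacing the default pinmux .cfg referenced by the flash configuration.
```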

Thanks. That did work.
