JetPack 4.2.1 - L4T R32.2 release for Jetson Nano, Jetson TX1/TX2, and Jetson AGX Xavier

NVIDIA is excited to announce the release of JetPack 4.2.1 and L4T R32.2, the latest production release, which includes new features and improvements for Jetson Nano, Jetson TX1/TX2, and Jetson AGX Xavier! See the Key Features section for a list of highlights, including:

  • Headless initial system configuration via the flashing USB port or UART
  • NVIDIA Indicator Applet for nvpmodel performance power mode switching
  • The Jetson Zoo repository of popular open-source packages and DNN models
  • Improved memory availability and boot-up power for Nano
  • Support for Ubuntu 18.04 aarch64 for Jetson TX1
  • Support for DeepStream 4.0 and ISAAC 2019.2
  • New beta features:
    • Xavier DLA support for INT8 in TensorRT
    • NVIDIA Container Runtime with Docker Integration (see the example below this list)
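
As a quick example of the container runtime beta: once JetPack has installed the runtime on the Jetson, a CUDA-capable container can be launched with the nvidia runtime, for example (the l4t-base image tag is taken from later in this thread):

sudo docker run -it --runtime nvidia nvcr.io/nvidia/l4t-base:r32.2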

The NVIDIA SDK Manager can be used to install JetPack 4.2.1, including developer tools with support for cross-compilation.

For more information, please refer to the JetPack Release Notes and L4T Release Notes.

JetPack 4.2.1 components:

  • L4T R32.2 (K4.9)
  • Ubuntu 18.04 LTS aarch64
  • CUDA 10.0.326
  • cuDNN 7.5.0.66
  • TensorRT 5.1.6.1
  • VisionWorks 1.6
  • OpenCV 3.3.1
  • Nsight Systems 2019.4
  • Nsight Graphics 2019.2
  • SDK Manager 0.9.13

Download JetPack…https://developer.nvidia.com/embedded/jetpack
Release Notes…https://docs.nvidia.com/jetson/jetpack/release-notes/index.html
L4T Release Notes…https://developer.nvidia.com/embedded/dlc/Tegra_Linux_Driver_Package_Release_Notes_R32.2.0_GA.pdf


Awesome that there is support for nvidia-runtime now, but when will the NGC and Docker Hub NVIDIA containers support aarch64? I like the “Run Anywhere” concept on https://www.nvidia.com/en-us/gpu-cloud/containers/

Hi, I downloaded the SDK Manager from JetPack SDK | NVIDIA Developer, and the version is sdkmanager_0.9.13-4763_amd64.deb. When I installed and ran it on my host, it still only shows version 4.2, not 4.2.1. How can I install version 4.2.1 on my TX2?

This release is identical to the previous one and still has install issues.

sudo apt install ./sdkmanager_0.9.12-4180_amd64.deb
[sudo] password for xxxxx:
Reading package lists… Done
Building dependency tree
Reading state information… Done
Note, selecting ‘sdkmanager:amd64’ instead of ‘./sdkmanager_0.9.12-4180_amd64.deb’
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
sdkmanager:amd64 : Depends: libgconf-2-4:amd64 but it is not installable
Depends: libcanberra-gtk-module:amd64 but it is not installable
Depends: locales:amd64 but it is not installable
E: Unable to correct problems, you have held broken packages.

The problem still persists, I can't install SDK Manager, and the file is exactly the same one as last time; absolutely nothing new was released and everything is broken.

Nvidia, did you try turning it off and on again?

Hi all, the updated debian package should be version sdkmanager_0.9.13-4763_amd64. Can you try downloading the latest from here:

[url]https://developer.nvidia.com/nvsdk-manager[/url]

If that link still downloads the previous version for you, there might be a CDN caching issue in your region, so please let us know where you are downloading from.

sudo apt install ./sdkmanager_0.9.13-4763_amd64.deb
[sudo] password for xxxx:
Reading package lists… Done
Building dependency tree
Reading state information… Done
Note, selecting ‘sdkmanager:amd64’ instead of ‘./sdkmanager_0.9.13-4763_amd64.deb’
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
sdkmanager:amd64 : Depends: libgconf-2-4:amd64 but it is not installable
Depends: libcanberra-gtk-module:amd64 but it is not installable
Depends: locales:amd64 but it is not installable
E: Unable to correct problems, you have held broken packages.

I am in Taiwan but seeing the same problem in Bulgaria.

sudo apt install ./sdkmanager_0.9.13-4763_amd64.deb
Reading package lists… Done
Building dependency tree
Reading state information… Done
Note, selecting ‘sdkmanager:amd64’ instead of ‘./sdkmanager_0.9.12-4180_amd64.deb’
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
sdkmanager:amd64 : Depends: libgconf-2-4:amd64 but it is not installable
Depends: libcanberra-gtk-module:amd64 but it is not installable
Depends: locales:amd64 but it is not installable
E: Unable to correct problems, you have held broken packages.

Got this with the brand new release, any solutions?

Solved. First I removed the previous sdkmanager by running the command dpkg -r sdkmanager, then installed the new sdkmanager by running the command dpkg -i ./sdkmanager_0.9.13-4763_amd64.deb; after that, everything is OK.
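
For anyone hitting the same unmet-dependency error, the two commands described above are (sudo added here on the assumption that you are not running as root):

sudo dpkg -r sdkmanager
sudo dpkg -i ./sdkmanager_0.9.13-4763_amd64.deb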

I have the same problem as turbocaff and mczemek.

I'm trying to install SDK Manager on my host Lubuntu 18.04.2 computer.
I think the problem is that I have an i386 machine instead of an amd64 one.

Originally I had the exact same errors.
Since then:
I have successfully downloaded and installed libgconf-2-4_3.2.6-4ubuntu1_amd64.deb.
I have successfully downloaded and installed libcanberra-gtk-module_0.30-5ubuntu1_amd64.deb.

The SDK Manager install no longer complains about them, but it still complains about locales.
I have downloaded and installed locales-all_2.27-3ubuntu1_amd64.deb.
I have also downloaded and installed locales_2.27-3ubuntu1_all.deb.

But I still get the following error:
sudo dpkg -i ./sdkmanager_0.9.13-4763_amd64.deb
Selecting previously unselected package sdkmanager:amd64.
(Reading database … 170045 files and directories currently installed.)
Preparing to unpack …/sdkmanager_0.9.13-4763_amd64.deb …
Unpacking sdkmanager:amd64 (0.9.13-4763) …
dpkg: dependency problems prevent configuration of sdkmanager:amd64:
sdkmanager:amd64 depends on locales.

dpkg: error processing package sdkmanager:amd64 (--install):
dependency problems - leaving unconfigured
Processing triggers for desktop-file-utils (0.23-1ubuntu3.18.04.2) …
Processing triggers for mime-support (3.60ubuntu1) …
Processing triggers for hicolor-icon-theme (0.17-2) …
Errors were encountered while processing:
sdkmanager:amd64

Nvidia needs to get this working for i386.

Hi hallmanNet18, SDK Manager requires an x86_64 distribution (Ubuntu 16.04 or Ubuntu 18.04). Please refer to the System Requirements here:

[url]https://docs.nvidia.com/sdk-manager/system-requirements/index.html[/url]

i386 refers to the 32-bit edition and amd64 (or x86_64) refers to the 64-bit edition for Intel and AMD processors.
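
To check which edition your host OS is actually running, for example:

$ dpkg --print-architecture
amd64
$ uname -m
x86_64

A 32-bit installation reports i386 / i686 instead, even on a 64-bit CPU.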

Try sudo apt-get update, or better yet do a clean reinstall, then sudo apt-get update and try installing again.

The computer is a 64-bit AMD dual-core 4450B. I was running lubuntu-18.04.2-desktop-i386 on a thumb drive because I read it would be lighter weight for older machines.

I tried every update possible on that and had no luck.

I am currently running lubuntu-18.04-desktop-amd64 and doing updates now.
I will let all know if that works.

Dear Sir,

When I run source_sync.sh and enter the tag tegra-l4t-r32.2, it returns an error that there is no such version to sync.

dennis@dennis:~/nvidia/nvidia_sdk/JetPack_4.2.1_Linux_GA_P2888/Linux_for_Tegra$ ./source_sync.sh 
Directory for kernel/kernel-4.9, /home/dennis/nvidia/nvidia_sdk/JetPack_4.2.1_Linux_GA_P2888/Linux_for_Tegra/sources/kernel/kernel-4.9, already exists!
remote: Enumerating objects: 5192157, done.
remote: Counting objects: 100% (5192157/5192157), done.
remote: Compressing objects: 100% (798604/798604), done.
fatal: Unable to read current working directory: No such file or directory
fatal: index-pack failed
error: Could not fetch origin
fatal: Unable to read current working directory: No such file or directory
./source_sync.sh: line 183: popd: /home/dennis/nvidia/nvidia_sdk/JetPack_4.2.1_Linux_GA_P2888/Linux_for_Tegra: No such file or directory
Please enter a tag to sync /home/dennis/nvidia/nvidia_sdk/JetPack_4.2.1_Linux_GA_P2888/Linux_for_Tegra/sources/kernel/kernel-4.9 source to
(enter nothing to skip): tegra-l4t-r32.2
./source_sync.sh: line 207: pushd: /home/dennis/nvidia/nvidia_sdk/JetPack_4.2.1_Linux_GA_P2888/Linux_for_Tegra/sources/kernel/kernel-4.9: No such file or directory
Couldn't find tag tegra-l4t-r32.2
/home/dennis/nvidia/nvidia_sdk/JetPack_4.2.1_Linux_GA_P2888/Linux_for_Tegra/sources/kernel/kernel-4.9 source sync to tag tegra-l4t-r32.2 failed!
./source_sync.sh: line 218: popd: /home/dennis/nvidia/nvidia_sdk/JetPack_4.2.1_Linux_GA_P2888/Linux_for_Tegra: No such file or directory


Downloading default kernel/nvgpu source...
Cloning into '/home/dennis/nvidia/nvidia_sdk/JetPack_4.2.1_Linux_GA_P2888/Linux_for_Tegra/sources/kernel/nvgpu'...
fatal: Unable to read current working directory: No such file or directory
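
For reference, a non-interactive way to sync just the kernel sources to a release tag is to pass it on the command line (the -k option here is assumed from the script's usage text; verify with the script's help output). The "Unable to read current working directory" errors above also suggest the Linux_for_Tegra directory was deleted or re-created while the script was running, so re-run it from a fresh shell in that directory:

cd ~/nvidia/nvidia_sdk/JetPack_4.2.1_Linux_GA_P2888/Linux_for_Tegra
./source_sync.sh -k tegra-l4t-r32.2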

Does “on a thumb drive” mean you boot your OS from a USB flash drive?
If so, the documentation says it won't work from a USB drive; it has to be installed on a system drive dedicated to the operating system.

I did a new Ubuntu 18.04 install on a new SSD in my laptop and I got the SDKManager to install.

The next issue is that SDK Manager apparently has a fixed window size and cannot be resized to fit the display on my laptop. The laptop display is 1366 x 768.

I have run out of monitors.

Facepalm

This is becoming a very painful software install compared to boards like the Raspberry Pi. Maybe the software developers need to revisit the RPi.

I tried loading Ubuntu on the desktop but it was unstable.
I then loaded lubuntu-18.04-desktop-amd64.
When I tried to install SDK Manager it was still missing packages, but I downloaded the missing packages and eventually got SDK Manager to run.
I was able to flash the Nano and I'm up and running.

The only curiosity is that SDK Manager only created a ~14 GB file system on the Nano, which is 92% full after loading and compiling the Hello AI World project. The actual SD card is 64 GB.

[UPDATE] Temp fix: saved the image as a tar archive, extracted it, modified the architecture metadata, re-created the tarball, and loaded it back in as an image. Inspired by https://stackoverflow.com/questions/42316614/how-can-i-edit-an-existing-docker-image-metadata. An image with the correct arm64 architecture flag is on Docker Hub as runadastra/l4t-base:r32.2
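
For anyone who wants to reproduce the workaround locally, a rough sketch of the steps (the config file name inside the archive is a placeholder; it is the JSON file referenced by manifest.json, and editing it changes the image ID):

docker save nvcr.io/nvidia/l4t-base:r32.2 -o l4t-base.tar
mkdir l4t-base && tar -xf l4t-base.tar -C l4t-base
# change "architecture": "amd64" to "arm64" in the image config JSON
# (adjust the sed pattern if the JSON spacing differs)
sed -i 's/"architecture":"amd64"/"architecture":"arm64"/' l4t-base/<config-hash>.json
tar -cf l4t-base-arm64.tar -C l4t-base .
docker load -i l4t-base-arm64.tar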

The nvcr.io/nvidia/l4t-base:r32.2 image is built with the wrong architecture tag of amd64 instead of arm64. This is not an issue when directly running a container from the image on a Jetson, for example:

$ docker run -ti nvcr.io/nvidia/l4t-base:r32.2 bash

However, when trying to deploy a stack on multiple Jetsons with a service utilizing the nvcr.io/nvidia/l4t-base:r32.2 image, the service reports unsupported architecture.

See below for the image details showing the wrong architecture flag of amd64. Please push an updated image with the correct arm64 architecture tag.

$ docker inspect nvcr.io/nvidia/l4t-base:r32.2
[ 
    {
        "Id": "sha256:e3c56b78e93f1d7b6bedaf1fb9146e71d3efad9dfdd1d5487c6a27d16ebe039b",
        "RepoTags": [
            "nvcr.io/nvidia/l4t-base:r32.2"
        ],
        "RepoDigests": [
            "nvcr.io/nvidia/l4t-base@sha256:56fdd1e0775441a2a10de0987efae9651b3054eab25abf11a87732f996da3d38"
        ],
        "Parent": "",
        "Comment": "",
        "Created": "2019-07-25T00:10:48.359397755Z",
        "Container": "",
        "ContainerConfig": {
            "Hostname": "",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": null,
            "Cmd": null,
            "Image": "",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": null
        },
        "DockerVersion": "",
        "Author": "",
        "Config": {
            "Hostname": "",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/cuda-10.0/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", 
                "LD_LIBRARY_PATH=/usr/local/cuda-10.0/targets/aarch64-linux/lib:",
                "NVIDIA_VISIBLE_DEVICES=all",
                "NVIDIA_DRIVER_CAPABILITIES=all"
            ],
            "Cmd": [
                "/bin/bash"
            ],
            "Image": "",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": null
        },
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 1063372897,
        "VirtualSize": 1063372897,
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/1ba72a7fc3c4fdd1f65235fd76cf5375bb787ded6f19f4e74cccc91d083e87a2/diff:/var/lib/docker/overlay2/65dd3442c68acca4e8206a6e5e8672a0f89a377b8a1c3f2cc5a5901f6e9cf352/diff:/var/lib/docker/overlay2/4f50def709e542e50101ea069c9e48f3269941e3295a39b79ca91b7b8b947f8f/diff:/var/lib/docker/overlay2/313d81d105c17dc46e023b38d8d5f600e8f2d4157a4dcbf0b41681c363f1ff1a/diff:/var/lib/docker/overlay2/2376b6269e965e456a5b774614bc5e54b6def048d0996e5365964ce8e0352c07/diff:/var/lib/docker/overlay2/64863b4faf9980173ed168ae65987bf466162ae611cbda4ec753e79fb8fc40eb/diff:/var/lib/docker/overlay2/ee8eb8ab02612f2904ce84b43d3721b544401633f4f04b73ce119c2d30924906/diff:/var/lib/docker/overlay2/4ada88f69325089b6d3772791f8fd67f1d5affa4e992f959905863d7528557e2/diff:/var/lib/docker/overlay2/e99d39ae68c09e4045631998ddb5e58e0262d376cba1ee4585bce754cd7f4b39/diff:/var/lib/docker/overlay2/f0468b0f8138fa7e9db97b6b63618969dfab3dcd810c206a0a7240e6de2c1509/diff:/var/lib/docker/overlay2/2647aaad97e621c1838f5ffae93492c8799c92d52f6ee0d2cf034ee12045125e/diff:/var/lib/docker/overlay2/f20b6506418fe1289c965d243cc1a842329c52b4eb70835e308dd223467a5383/diff:/var/lib/docker/overlay2/ecfb18b09b353aab52735eced8c4766c4dc56579a72b2ca4e06263a7c1b23ffe/diff:/var/lib/docker/overlay2/01ce6e242a627c4c3c5360cec8cf6b30b1912ffcd58094887ce1fb053506a57c/diff:/var/lib/docker/overlay2/7b6e7e874e6cd148bc387dd51788369ece9d25d61d1e767f1820759b8b7a3db6/diff",
                "MergedDir": "/var/lib/docker/overlay2/dbac51a5f1ff752b4d5851768150b904444931fdd8a1e33b1d1414cf9782adc0/merged",
                "UpperDir": "/var/lib/docker/overlay2/dbac51a5f1ff752b4d5851768150b904444931fdd8a1e33b1d1414cf9782adc0/diff",
                "WorkDir": "/var/lib/docker/overlay2/dbac51a5f1ff752b4d5851768150b904444931fdd8a1e33b1d1414cf9782adc0/work"
            },
            "Name": "overlay2"
        },
        "RootFS": {
            "Type": "layers",
            "Layers": [
                "sha256:b5626286212ffdbfa2433f6423eaffb0023faec33441b9f58bef36febd4e315c",
                "sha256:54842367e40a8bf38267dde4ce27f01180e9ce7900aec1b436c3a44baa2a582f",
                "sha256:01c890d3e0447dc45885d313d612095bd754eb4ab4cf21257ad407172d54d5d3",
                "sha256:5d1b522dd9ac7e646a5e309737541395b67a894fd22f5f4e4587f7d82ecb18dc",
                "sha256:4675d0712b8afd435b9ec7502cee1e73cfa86ac19350f066b5066aabfdbb5d4f",
                "sha256:3bcbe6785ed1e4209b37e4ceb2f7c8a06564d885f77eda722191f429a64788d5",
                "sha256:196dbdce1433d7a7f39261c5a97eebd19ebdf6b32f56c475a5977c2743520db5",
                "sha256:a7af9324aad390c3c7e5b7dba00c7f65a6b332dac6719921c2806796ea0b9f2e",
                "sha256:51f3091faa98e5b49f020582eb7fc47d3745cb9c912ed135e28d0eb58eae9bf9",
                "sha256:3dbf8d37965f50de4b355cd499821627f6d10be49fe91506f9ee68c1f0406c4a",
                "sha256:b52ecd06377306aef2d91620ff1acb53a87577246c248945797ad0a6f73baa5c",
                "sha256:ae0ac36682576aeb6b43101526b39d57fd9a89e1d35eac0fa8ca6daa7e3ee252",
                "sha256:34a39bb24c65397808ec1ee408c20d2f981dbf7a436733bbdaa1389392c62471",
                "sha256:85b0a77757959ca4d91b71fe04aa58fae8ad51c81ed7d934a1aceaea4f17b837",
                "sha256:15dd63b47c4bee62fe892ebfb95e4d50d853f849f0cd77238a7d94fee52cb0af",
                "sha256:c7dc12f049f65200e04640c4b1a19c4b6f6963feef68d80f2aec1f8cd40b3d94"
            ]
        },
        "Metadata": {
            "LastTagTime": "0001-01-01T00:00:00Z"
        }
    }
]
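
For reference, a quicker way to check just the architecture field without the full inspect dump:

$ docker inspect --format '{{.Architecture}}' nvcr.io/nvidia/l4t-base:r32.2
amd64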

It says “Headless initial system configuration via the flashing USB port or UART”.

But how do I actually do this on a Jetson TX2?
I have flashed and rebooted it. How do I SSH to it? Is there a user account pre-created? Can I connect via Ethernet or the OTG cable? Is there documentation on all this somewhere?

Hi jonathanhtxp3, see here for the documentation on Headless Setup:

https://docs.nvidia.com/jetson/l4t/Tegra%20Linux%20Driver%20Package%20Development%20Guide/flashing.html#wwpID0E0DB0HA

The user account is created during these initial setup steps. For security reasons, the old “nvidia” account is no longer pre-created.
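
A minimal sketch of the serial connection, assuming the Jetson's flashing micro-USB port is cabled to the host and enumerates as /dev/ttyACM0 (the exact device name can differ; check dmesg after plugging it in):

sudo screen /dev/ttyACM0 115200

The initial configuration prompts for locale, username, and password appear over this serial connection; once the account is created, you can SSH in over Ethernet with that account.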