JetPack 4.6 Production Release with L4T 32.6.1

Thanks for your support. I'm using the L4T 32.6.1 JetPack image. I'm trying to build a C++ app with Docker. I've seen that setting "default-runtime": "nvidia" in /etc/docker/daemon.json fixes this error. Anyway, now I'm getting the following one:


Building wheels for collected packages: abc
  Building wheel for tracktorpy (PEP 517): started
  Running command /usr/bin/python3 /usr/local/lib/python3.6/dist-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmplwc0zzi0
  running bdist_wheel
  running build
  running build_ext
  -- The CXX compiler identification is GNU 7.5.0
  -- The C compiler identification is GNU 7.5.0
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: /usr/bin/c++ - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: /usr/bin/cc - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- Loading submodules
  -- Submodule update
  -- Adding hungarian
  -- pybind11 v2.6.1
  -- Found PythonInterp: /usr/bin/python3 (found version "3.6.9")
  -- Found PythonLibs: /usr/lib/aarch64-linux-gnu/libpython3.6m.so
  -- Performing Test HAS_FLTO
  -- Performing Test HAS_FLTO - Success
  -- Adding core
  -- Adding tracking
  -- Looking for pthread.h
  -- Looking for pthread.h - found
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
  -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
  -- Looking for pthread_create in pthreads
  -- Looking for pthread_create in pthreads - not found
  -- Looking for pthread_create in pthread
  -- Looking for pthread_create in pthread - found
  -- Found Threads: TRUE
  -- Found CUDA: /usr/local/cuda-10.2 (found suitable version "10.2", minimum required is "9.0")
  -- Found TBB: /usr/include (found version "2017.0")
  -- Adding Romain-Detector
  -- The CUDA compiler identification is NVIDIA 10.2.300
  -- Detecting CUDA compiler ABI info
  -- Detecting CUDA compiler ABI info - done
  -- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
  -- Detecting CUDA compile features
  -- Detecting CUDA compile features - done
  -- Found Boost: /prefix/include (found suitable version "1.68.0", minimum required is "1.59") found components: filesystem iostreams system regex
  -- Found OpenCV: /prefix (found suitable version "4.3.0", minimum required is "4.0") found components: core imgproc dnn
  -- Adding tracktor
  -- Found OpenCV: /prefix (found suitable version "4.3.0", minimum required is "4.0") found components: core
  -- Adding tracktorpy
  -- Found PythonInterp: /usr/bin/python3 (found suitable version "3.6.9", minimum required is "3")
  -- Found PythonLibs: /usr/lib/aarch64-linux-gnu/libpython3.6m.so (found suitable version "3.6.9", minimum required is "3")
  -- Configuring done
  -- Generating done
  -- Build files have been written to: /tmp/pip-req-build-yylq0q4n/build/temp.linux-aarch64-3.6
  [ 12%] Built target core
  [ 15%] Building CXX object modules/tracking/CMakeFiles/tracking.dir/src/baseTracking.cpp.o
  [ 18%] Building CUDA object modules/detection/CMakeFiles/detector.dir/src/chunk.cu.o
  /tmp/pip-req-build-yylq0q4n/modules/detection/include/detection/chunk.h(56): error: member function declared with "override" does not override a base class member

  /tmp/pip-req-build-yylq0q4n/modules/detection/include/detection/chunk.h(70): error: function "nvinfer1::IPluginV2IOExt::configurePlugin(const nvinfer1::Dims *, int32_t, const nvinfer1::Dims *, int32_t, const nvinfer1::DataType *, const nvinfer1::DataType *, const __nv_bool *, const __nv_bool *, nvinfer1::PluginFormat, int32_t)"
  /usr/include/aarch64-linux-gnu/NvInferRuntimeCommon.h(836): here is inaccessible

I noticed this line:

-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped

It looks like nvcc isn't found. But I launched a Docker container and saw it there. My CMakeLists.txt is set up as:

#-------------------------------------------------------------------------------
# CUDA
#-------------------------------------------------------------------------------
set(CMAKE_CUDA_COMPILER "/usr/local/cuda/bin/nvcc")
enable_language(CUDA)

find_package(CUDA 9.0 REQUIRED)
set(CUDA_SEPARABLE_COMPILATION ON)
set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} --maxrregcount=32 --compiler-options '-fPIC')
set(CMAKE_CUDA_STANDARD 14)
set(CMAKE_CUDA_STANDARD_REQUIRED TRUE)

Hi, @AdrianoSantosPB

It's essential to add "default-runtime": "nvidia" to enable nvcc access during docker build operations.
Since your app requires nvcc, you will hit errors like this if /etc/docker/daemon.json is not updated.
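For reference, a typical /etc/docker/daemon.json on JetPack looks roughly like this (a sketch; the "runtimes" block is usually already present from the NVIDIA container runtime install, so often only the "default-runtime" line needs adding):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
```

Restart the Docker daemon afterwards (e.g. sudo systemctl restart docker) so the change takes effect for subsequent docker build runs.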

You can find more information in the GitHub link below.

Thanks.

Hello again,

I am trying to install Triton Inference Server from the source files of this published release: Releases · triton-inference-server/server · GitHub

Here is my system information:
NVIDIA Jetson Nano
Ubuntu 18.04 LTS
JetPack 4.6

I have downloaded the file called tritonserver2.12.0-jetpack4.6.tgz, but I am not able to compile/install Triton Server.

What should I do with the source code file (the tar.gz file at the bottom)?

I have attached the image of tritonserver2.12.0-jetpack4.6.tgz file folder.


How can I compile/install and run triton inference server?

There is a problem on the Jetson Nano - the latest JetPack does NOT get installed when you use apt update and apt upgrade.

I just checked my version of L4T and it is dated October last year:

nano@jetson-nano:~$ cat /etc/nv_tegra_release
# R32 (release), REVISION: 4.4, GCID: 23942405, BOARD: t210ref, EABI: aarch64, DATE: Fri Oct 16 19:44:43 UTC 2020

@jetsonnvidia I hope you referred to the instructions here:

https://docs.nvidia.com/jetson/jetpack/install-jetpack/index.html#package-management-tool

The first step is to upgrade L4T. Please refer to "To update to a new minor release" here:
https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/updating_jetson_and_host.html#wwpID0E0AL0HA

It involves changing the apt source list to point to the 32.6.1 repo and then doing apt update and apt dist-upgrade.
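A sketch of that flow on a Nano, assuming the stock nvidia-l4t-apt-source.list layout (the t210 repo is Nano/TX1-specific; other modules use different SoC repos):

```shell
# Point the L4T apt sources at the r32.6 repo; preview first, then apply.
SRC=/etc/apt/sources.list.d/nvidia-l4t-apt-source.list
sed 's/r32\.[0-9]*/r32.6/g' "$SRC"          # preview the rewritten sources
sudo sed -i 's/r32\.[0-9]*/r32.6/g' "$SRC"  # apply in place

# Refresh package lists and pull the new minor release.
sudo apt update
sudo apt dist-upgrade
```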

1 Like

Hi @skilic, you should use the file tritonserver2.12.0-jetpack4.6.tgz. All installation instructions for JetPack are written below "Jetson JetPack Support" in the release note: Release 2.12.0 corresponding to NGC container 21.07 · triton-inference-server/server · GitHub

To learn how to deploy your models, please refer to the Quick Start Guide: https://github.com/triton-inference-server/server/blob/main/docs/quickstart.md
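In other words, that .tgz is a prebuilt binary package for Jetson, not source code that needs compiling. A rough sketch of using it (the extraction directory and model-repository path here are placeholders, not official paths):

```shell
# Extract the prebuilt Triton release into a working directory.
mkdir -p ~/tritonserver
tar -xzf tritonserver2.12.0-jetpack4.6.tgz -C ~/tritonserver

# Launch the server binary against a model repository you have prepared.
~/tritonserver/bin/tritonserver --model-repository=/path/to/model_repo
```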

1 Like

Thanks, that fixed it. I forgot that you needed to edit sources.list.

My upgrade is broken. I made the changes for a minor release, so now I have this in /etc/apt/sources.list.d/nvidia-l4t-apt-source.list:

deb https://repo.download.nvidia.com/jetson/common r32.6 main
deb https://repo.download.nvidia.com/jetson/t210 r32.6 main

When I try to do a dist-upgrade I get this:

nano@jetson-nano:~$ sudo apt dist-upgrade
Reading package lists... Done
Building dependency tree       
Reading state information... Done
You might want to run 'apt --fix-broken install' to correct these.
The following packages have unmet dependencies.
 cuda-command-line-tools-10-2 : Depends: cuda-nvprof-10-2 (>= 10.2.300) but it is not installed
E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).

When I try apt --fix-broken install I get this:

nano@jetson-nano:~$ sudo apt --fix-broken install
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Correcting dependencies... Done
The following additional packages will be installed:
  cuda-nvprof-10-2
The following NEW packages will be installed
  cuda-nvprof-10-2
0 to upgrade, 1 to newly install, 0 to remove and 61 not to upgrade.
22 not fully installed or removed.
Need to get 0 B/1,059 kB of archives.
After this operation, 4,807 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
debconf: Delaying package configuration, since apt-utils is not installed.
(Reading database ... 172633 files and directories currently installed.)
Preparing to unpack .../cuda-nvprof-10-2_10.2.300-1_arm64.deb ...
Unpacking cuda-nvprof-10-2 (10.2.300-1) ...
dpkg: error processing archive /var/cache/apt/archives/cuda-nvprof-10-2_10.2.300-1_arm64.deb (--unpack):
 trying to overwrite '/usr/local/cuda-10.2/targets/aarch64-linux/include/cudaProfiler.h', which is also in package cuda-misc-headers-10-2 10.2.89-1
dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)
Errors were encountered while processing:
 /var/cache/apt/archives/cuda-nvprof-10-2_10.2.300-1_arm64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

What is going on?

Update: fixed it by doing sudo dpkg -r --force-all cuda-misc-headers-10-2

1 Like

Please post your pipeline so that this issue can be reproduced and hopefully a workaround can be found.

Hi @hlacik,
Please make a new post with detailed information so that we can check and suggest next steps.

Very happy about this. Thanks!

Can anyone please help me out? I'm having a problem with the first boot of my Jetson Nano 2GB.

Try another SD card. Looks like that’s the source of the issue.

1 Like

Do you plan for your NGC containers for L4T DeepStream and TensorFlow to be based on the l4t-cuda or l4t-tensorrt containers in the future? Is there a plan to normalize this as on the x86_64 architecture, where you provide NVIDIA CUDA containers and other containers depend on them (i.e. the host OS has only Docker plus the NVIDIA GPU driver)?

Yes, I just changed the SD card from a 64GB Samsung to a 64GB HP card and it worked fine. Thank you!

@ShaneCCC I just downloaded them and wrote a driver for my camera, which is based on the imx219. I wrote the dtsi, *.c, and *.h files, and modified some code so that I could compile the Image file and all modules on a host PC.
I had the following problems:

  1. I want to flash to an external SD card, but it failed.
  2. I copied the Image and dts files into the /boot folder; my Jetson Nano can boot, but it cannot detect my camera. In /proc/device-tree I cannot find my camera in the folders i2c@0 or i2c@1 etc., unlike imx219.
    How can I debug whether it tries to detect my camera during boot?

Thanks for your help

Hi,
has anybody tried to use VPI 1.1 in Python?
I cannot do import vpi on my 4GB B02 development Nano.
libnvvpi1 is installed ("libnvvpi1 is already the newest version (1.1.12)").

Python 3.6.9 (default, Jan 26 2021, 15:33:00) 
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import vpi
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'vpi'

BR Erich

@erich.voko is the python3-vpi package installed? How did you install JetPack on your Nano? If using SDK Manager or the SD card image, the python3-vpi package should have been present.

@suhash Thanks, that lib was missing! The correct name is python3-vpi1.
I installed an earlier version of JetPack with the SD card image long ago; since then I "only" update/upgrade the running system, because I develop on the Nano and therefore have a "big" system with a lot of things installed. So it is too complicated to always start with a new, empty system. Another problem is that my company notebook runs Ubuntu 20.04, so SDK Manager does not run on it (but I have not tried in the last month, so maybe this is old information).

BR
Erich
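For anyone scripting around this, a small check can save a confusing traceback. This sketch assumes only what the exchange above established: the module is named vpi and the apt package is python3-vpi1.

```python
import importlib.util


def vpi_available() -> bool:
    """Return True if the 'vpi' Python module can be imported."""
    return importlib.util.find_spec("vpi") is not None


if not vpi_available():
    # Package name per the thread above; applies to JetPack 4.6 / VPI 1.1.
    print("vpi module not found -- try: sudo apt install python3-vpi1")
```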

I recently installed JetPack 4.6 on an NVIDIA Jetson Nano 2GB and have tried to use the MATLAB GPU Coder to compile C and C++ code on the Jetson board. However, when connecting to the Jetson, MATLAB does not read a CUDA version at all.
After further investigation, in /usr/local/ there are folders for cuda, cuda-10, and cuda-10.2. However, when I use the command "nvcc --version" to check what version is installed, it returns "bash: nvcc: command not found", and the "nvidia-smi" command yields the same result.

When the command "cat /usr/local/cuda/version.txt" is entered, "CUDA Version 10.2.300" is returned.

I’m not sure what the discrepancy in these results is due to. Please let me know what you think.
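Not an official answer, but a common cause worth checking: on Jetson, nvcc is installed under /usr/local/cuda/bin yet is not on PATH by default, and nvidia-smi does not exist on Jetson at all (tegrastats is the closest equivalent). A quick sanity check, assuming the usual /usr/local/cuda install prefix:

```shell
# Confirm the toolkit binary is actually present on disk.
ls /usr/local/cuda/bin/nvcc

# Put the CUDA tools on PATH for this shell session.
export PATH=/usr/local/cuda/bin:$PATH

# nvcc should now report its release (10.2 on JetPack 4.6).
nvcc --version
```

To make this persistent, the export line is typically added to ~/.bashrc.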