Thanks for your support. I’m using the L4T 32.6.1 JetPack image, and I’m trying to build a C++ app with Docker. I’ve seen that if I set “default-runtime”: “nvidia” in /etc/docker/daemon.json, this error is fixed. Anyway, now I’m getting the following one:
Building wheels for collected packages: abc
Building wheel for tracktorpy (PEP 517): started
Running command /usr/bin/python3 /usr/local/lib/python3.6/dist-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmplwc0zzi0
running bdist_wheel
running build
running build_ext
-- The CXX compiler identification is GNU 7.5.0
-- The C compiler identification is GNU 7.5.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Loading submodules
-- Submodule update
-- Adding hungarian
-- pybind11 v2.6.1
-- Found PythonInterp: /usr/bin/python3 (found version "3.6.9")
-- Found PythonLibs: /usr/lib/aarch64-linux-gnu/libpython3.6m.so
-- Performing Test HAS_FLTO
-- Performing Test HAS_FLTO - Success
-- Adding core
-- Adding tracking
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found CUDA: /usr/local/cuda-10.2 (found suitable version "10.2", minimum required is "9.0")
-- Found TBB: /usr/include (found version "2017.0")
-- Adding Romain-Detector
-- The CUDA compiler identification is NVIDIA 10.2.300
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Found Boost: /prefix/include (found suitable version "1.68.0", minimum required is "1.59") found components: filesystem iostreams system regex
-- Found OpenCV: /prefix (found suitable version "4.3.0", minimum required is "4.0") found components: core imgproc dnn
-- Adding tracktor
-- Found OpenCV: /prefix (found suitable version "4.3.0", minimum required is "4.0") found components: core
-- Adding tracktorpy
-- Found PythonInterp: /usr/bin/python3 (found suitable version "3.6.9", minimum required is "3")
-- Found PythonLibs: /usr/lib/aarch64-linux-gnu/libpython3.6m.so (found suitable version "3.6.9", minimum required is "3")
-- Configuring done
-- Generating done
-- Build files have been written to: /tmp/pip-req-build-yylq0q4n/build/temp.linux-aarch64-3.6
[ 12%] Built target core
[ 15%] Building CXX object modules/tracking/CMakeFiles/tracking.dir/src/baseTracking.cpp.o
[ 18%] Building CUDA object modules/detection/CMakeFiles/detector.dir/src/chunk.cu.o
/tmp/pip-req-build-yylq0q4n/modules/detection/include/detection/chunk.h(56): error: member function declared with "override" does not override a base class member
/tmp/pip-req-build-yylq0q4n/modules/detection/include/detection/chunk.h(70): error: function "nvinfer1::IPluginV2IOExt::configurePlugin(const nvinfer1::Dims *, int32_t, const nvinfer1::Dims *, int32_t, const nvinfer1::DataType *, const nvinfer1::DataType *, const __nv_bool *, const __nv_bool *, nvinfer1::PluginFormat, int32_t)"
/usr/include/aarch64-linux-gnu/NvInferRuntimeCommon.h(836): here is inaccessible
I noticed this line:
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
It looks like nvcc isn’t found. But when I launched a Docker container, I saw it there. My CMakeLists.txt is set as:
It’s essential to add "default-runtime": "nvidia" to enable nvcc access during docker build operations. Since your app requires nvcc, you will get errors like this if /etc/docker/daemon.json is not updated. You can find more information in the GitHub link below.
My upgrade is broken. I made the changes for a minor release so now I have this in /etc/apt/sources.list.d/nvidia-l4t-apt-source.list:
deb https://repo.download.nvidia.com/jetson/common r32.6 main
deb https://repo.download.nvidia.com/jetson/t210 r32.6 main
When I try to do a dist-upgrade I get this:
nano@jetson-nano:~$ sudo apt dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt --fix-broken install' to correct these.
The following packages have unmet dependencies.
cuda-command-line-tools-10-2 : Depends: cuda-nvprof-10-2 (>= 10.2.300) but it is not installed
E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).
When I try apt --fix-broken install I get this:
nano@jetson-nano:~$ sudo apt --fix-broken install
Reading package lists... Done
Building dependency tree
Reading state information... Done
Correcting dependencies... Done
The following additional packages will be installed:
cuda-nvprof-10-2
The following NEW packages will be installed
cuda-nvprof-10-2
0 to upgrade, 1 to newly install, 0 to remove and 61 not to upgrade.
22 not fully installed or removed.
Need to get 0 B/1,059 kB of archives.
After this operation, 4,807 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
debconf: Delaying package configuration, since apt-utils is not installed.
(Reading database ... 172633 files and directories currently installed.)
Preparing to unpack .../cuda-nvprof-10-2_10.2.300-1_arm64.deb ...
Unpacking cuda-nvprof-10-2 (10.2.300-1) ...
dpkg: error processing archive /var/cache/apt/archives/cuda-nvprof-10-2_10.2.300-1_arm64.deb (--unpack):
trying to overwrite '/usr/local/cuda-10.2/targets/aarch64-linux/include/cudaProfiler.h', which is also in package cuda-misc-headers-10-2 10.2.89-1
dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)
Errors were encountered while processing:
/var/cache/apt/archives/cuda-nvprof-10-2_10.2.300-1_arm64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
What is going on?
Update: I fixed it by running sudo dpkg -r --force-all cuda-misc-headers-10-2
Do you plan, in the future, to base your NGC containers for L4T DeepStream and TensorFlow on the l4t-cuda or l4t-tensorrt containers? Is there a plan to normalize this as on the x86_64 architecture, where you provide NVIDIA CUDA containers and everything else depends on them (i.e., the host OS has only Docker plus the NVIDIA GPU driver)?
@ShaneCCC I just downloaded the sources and wrote a driver for my camera, based on the imx219 camera. So I wrote the dtsi and the *.c and *.h files, and modified some code so that I can compile the Image file and all modules on a host PC.
I had the following problems:
I wanted to flash to an external SD card, but it failed.
I copied the Image and dts files into the /boot folder; my Jetson Nano can boot, but it cannot detect my camera. In /proc/device-tree I cannot find my camera under i2c@0 or i2c@1 etc., like imx219.
How can I debug whether it tries to detect my camera during boot?
Hi,
has somebody tried to use VPI 1.1 in Python?
I cannot do import vpi on my 4GB development B02 nano.
libnvvpi1 is installed (libnvvpi1 is already the newest version (1.1.12)).
Python 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import vpi
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'vpi'
@erich.voko Is the python3-vpi package installed? How did you install JetPack on your Nano? If you used the SDK Manager or the SD card image, the python3-vpi package should have been present.
@suhash Thanks, that lib was missing! The correct name is python3-vpi1.
I installed an earlier version of JetPack from an SD card image long ago; since then I “only” update/upgrade the running system, because I develop on the Nano and therefore have a “big” system with a lot of things installed, so it is too complicated to always start with a new empty system. Another problem is that my company notebook runs Ubuntu 20.04, so the SDK Manager does not run on it (but I have not tried in the last month, so maybe that is old information).
I recently installed JetPack 4.6 on a NVIDIA Jetson Nano 2GB and have tried to use the MATLAB GPU coder to compile C and C++ code on the Jetson board. However, when connecting to the Jetson, MATLAB does not read a CUDA version at all.
After further investigation, in /usr/local/ there are folders for cuda, cuda-10 and cuda-10.2. However, when I use the command “nvcc --version” to check what version is installed, it returns “bash: nvcc: command not found”, and the “nvidia-smi” command yields a similar result.
When the command “cat /usr/local/cuda/version.txt” is entered, “CUDA Version 10.2.300” is returned.
I’m not sure what the discrepancy in these results is due to. Please let me know what you think.