Error while Installing Cuda-Toolkit

Please provide the following info (tick the boxes after creating this topic):
Software Version
[*] DRIVE OS 6.0.10.0
DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
[*] Linux
QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
[*] DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure of its number)
other

SDK Manager Version
2.1.0
other

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
[*] native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

Issue Description
I am trying to validate whether the GPU is enabled on the NVIDIA DRIVE AGX Orin platform, for which I used the commands below:
a) In a Python environment, torch.cuda.is_available(), which returns False.
b) Checking the output of nvidia-smi, but this binary is not present on the system.
Hence, I followed the instructions at the link CUDA Toolkit 12.6 Update 2 Downloads | NVIDIA Developer, and after running sudo apt-get install -y cuda-drivers the system reports unmet dependencies; I am unable to remove or upgrade any packages, and sudo dpkg --configure -a also fails.

What is the way to recover the system without reflashing?
Attaching the complete error logs from updating, upgrading, or removing packages.
cuda_installation_error_logs.txt (60.0 KB)
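The two checks described above can be folded into one small sketch for reference (the /usr/local/cuda-11.4 path assumed here matches the stock DRIVE OS install discussed later in this thread; torch may not be importable in every environment):

```shell
# Hedged sketch: ask PyTorch whether it sees CUDA, and confirm the
# preinstalled CUDA 11.4 toolkit tree exists on the target.
check_gpu() {
  python3 - <<'PY'
try:
    import torch
    print("torch sees CUDA:", torch.cuda.is_available())
except ImportError:
    print("torch not installed in this environment")
PY
  if [ -d /usr/local/cuda-11.4 ]; then
    echo "CUDA 11.4 toolkit present"
  else
    echo "CUDA 11.4 toolkit missing"
  fi
}

check_gpu
```

This avoids depending on nvidia-smi at all, which matters here because (as noted below) that utility does not ship on DRIVE.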

If you want to test whether the GPU is detected and there is no issue with the GPU on the Orin devkit, you can use deviceQuery:

nvidia@tegra-ubuntu:~$ cd /usr/local/cuda-11.4/samples/1_Utilities/deviceQuery
nvidia@tegra-ubuntu:/usr/local/cuda-11.4/samples/1_Utilities/deviceQuery$ ls
Makefile  NsightEclipse.xml  deviceQuery  deviceQuery.cpp  deviceQuery.o  readme.txt
nvidia@tegra-ubuntu:/usr/local/cuda-11.4/samples/1_Utilities/deviceQuery$ ./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Orin"
  CUDA Driver Version / Runtime Version          12.1 / 11.4
  CUDA Capability Major/Minor version number:    8.7
  Total amount of global memory:                 28954 MBytes (30360248320 bytes)
  (016) Multiprocessors, (128) CUDA Cores/MP:    2048 CUDA Cores
  GPU Max Clock rate:                            1275 MHz (1.27 GHz)
  Memory Clock rate:                             1275 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 4194304 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        167936 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1536
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 12.1, CUDA Runtime Version = 11.4, NumDevs = 1
Result = PASS

DRIVE OS 6.0.10 already comes with CUDA 11.4, and you are not expected to install the CUDA toolkit or drivers on the target.

Hi @SivaRamaKrishnaNV,

Just an update: the package manager, which was broken, has been recovered with the commands below:
a) sudo dpkg --force-all -P nvidia-compute-utils-560 nvidia-container-toolkit nvidia-driver-560 libnvidia-compute-560 libnvidia-container-tools libnvidia-gl-560 libnvidia-extra-560 libnvidia-decode-560 libnvidia-encode-560 nvidia-utils-560 libnvidia-cfg1-560 libnvidia-fbc1-560 libnvidia-cfg1-560 xserver-xorg-video-nvidia-560
b) sudo apt autoremove && sudo apt autoclean && sudo apt clean && sudo apt-get update && sudo apt upgrade
c) sudo dpkg --force-all -P cuda-drivers cuda-drivers-560 nvidia-driver-560 nvidia-driver-560-open nvidia-driver-560-server nvidia-driver-560-server-open
d) sudo dpkg --configure -a
e) sudo apt --fix-broken install && sudo apt autoremove
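For anyone hitting the same breakage, steps (a)–(e) above can be combined into one sketch (package names copied from this thread; it defaults to a dry run that only prints the commands, since the purge is destructive — set DRY_RUN=0 to actually execute):

```shell
# Recovery sketch assembled from steps (a)-(e) above.
# DRY_RUN=1 (default) only prints the commands; DRY_RUN=0 runs them via sudo.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ sudo $*"    # dry run: show what would be executed
  else
    sudo "$@"
  fi
}

# x86-style 560 driver packages that cuda-drivers pulled in (steps a and c)
PKGS="nvidia-compute-utils-560 nvidia-container-toolkit nvidia-driver-560"
PKGS="$PKGS libnvidia-compute-560 libnvidia-container-tools libnvidia-gl-560"
PKGS="$PKGS libnvidia-extra-560 libnvidia-decode-560 libnvidia-encode-560"
PKGS="$PKGS nvidia-utils-560 libnvidia-cfg1-560 libnvidia-fbc1-560"
PKGS="$PKGS xserver-xorg-video-nvidia-560 cuda-drivers cuda-drivers-560"
PKGS="$PKGS nvidia-driver-560-open nvidia-driver-560-server nvidia-driver-560-server-open"

run dpkg --force-all -P $PKGS          # purge the conflicting packages
run apt-get -y autoremove              # step (b): clean leftovers
run apt-get -y autoclean
run apt-get clean
run apt-get update
run apt-get -y upgrade
run dpkg --configure -a                # step (d): finish pending configures
run apt-get -y --fix-broken install    # step (e): repair dependencies
```

The dry-run wrapper is my own addition, not part of the original recovery; review the printed command list before running it for real.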

Thanks for sharing the inputs. I am able to get the same output as shared. However, I cannot find the nvidia-smi utility on the system to check things further; it was only to get this utility that I tried to install the CUDA toolkit.

Dear @PA_GN ,
nvidia-smi is not available on DRIVE. You can use tegrastats or the nsys profiler to check GPU usage.
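As a small illustration, the GPU load can be scraped out of a tegrastats line like so (the sample line below is fabricated for the sketch; the GR3D_FREQ field name follows tegrastats output on Tegra-class devices — run tegrastats on the target for real samples):

```shell
# Extract the GPU (GR3D) utilization percentage from a tegrastats line.
gpu_load() {
  grep -oE 'GR3D_FREQ [0-9]+' | awk '{print $2; exit}'
}

# Illustrative line only, not captured from a real device.
sample='RAM 4096/28954MB (lfb 512x4MB) CPU [12%,8%,5%,3%] GR3D_FREQ 37%@1275'
echo "$sample" | gpu_load    # prints 37
```

Piping live output, e.g. `tegrastats | while read -r line; do echo "$line" | gpu_load; done`, gives a rough per-sample GPU utilization in place of nvidia-smi.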

Hi @SivaRamaKrishnaNV,

Currently, my Orin setup has multiple CUDA versions available. How do I configure it to use a specific CUDA version?

Also, are there any steps to force an application to execute on the GPU? I have a YOLO-based application that reads camera images in PNG format, performs annotation on top of them, and converts them back into PNG images. I am executing this YOLO application in a conda environment. How can I ensure this application runs on the GPU?

Note that the CUDA 11.4 installed on the target is officially supported on DRIVE. TensorRT and DW are expected to use CUDA 11.4.
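If the question is only about which toolkit a shell picks up when several live under /usr/local, one common approach is to pin the environment to the supported 11.4 tree (a sketch, not an official DRIVE procedure; adjust the paths if your installation differs):

```shell
# Point the current shell at the DRIVE-supported CUDA 11.4 installation.
export CUDA_HOME=/usr/local/cuda-11.4
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# Verify which nvcc the shell now resolves (should be the 11.4 one).
command -v nvcc || echo "nvcc not found under $CUDA_HOME/bin"
```

Adding the two export lines to ~/.bashrc (or the conda environment's activation script) makes the selection persistent for that environment.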

Are you using a TensorRT or DW application, or something else? Please provide details of the application in a new topic to avoid cluttering a single topic with multiple issues.

The original query in description seems resolved.

Hi @SivaRamaKrishnaNV

Something else.

I have raised a separate query for this topic:
how-to-execute-external-applications-on-gpu

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.