No GPU detected on Jetson Orin Nano for object detection

Hello dear community,

I need your help setting up an NVIDIA Jetson Orin Nano Developer Kit for object detection. I have a script with which object detection basically works, but unfortunately only on the CPU, and therefore it is much too slow. My model is a pre-trained MobileNetV2 model.

I followed this guide: https://www.jetson-ai-lab.com/initial_setup_jon.html#6-boot-with-jetpack-6x-sd-card

I use a microSD card and have also installed an SSD.

Here you can find my code:

import cv2
import numpy as np
import tensorflow as tf
import time
import os
import shutil
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

import subprocess

# Print the CUDA toolkit version reported by nvcc
cuda_version = subprocess.run(['nvcc', '--version'], capture_output=True, text=True)
print('cuda_version: ' + str(cuda_version.stdout))

import ctypes

# Query the cuDNN version directly from the shared library
cuDNN_version = ctypes.CDLL('libcudnn.so').cudnnGetVersion()
print('cuDNN_version: ' + str(cuDNN_version))

print('tf_version: ' + str(tf.__version__))

# Check whether TensorFlow can see a GPU and, if so, restrict it to the first one
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        tf.config.experimental.set_memory_growth(gpus[0], True)
        tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
        print("GPU will be used.")
    except RuntimeError as e:
        print(e)
else:
    print("No GPU available.")

# Load the pre-trained SavedModel from a directory next to this script
script_directory = os.path.dirname(os.path.abspath(__file__))
model_directory = os.path.join(script_directory, 'GPU_Model/saved_model')
model = tf.saved_model.load(model_directory)

and this is what I get as a result:

cuda_version: nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Wed_Aug_14_10:14:07_PDT_2024
Cuda compilation tools, release 12.6, V12.6.68
Build cuda_12.6.r12.6/compiler.34714021_0
cuDNN_version: 90300
tf_version: 2.18.0
No GPU available.
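As a side note on reading that cuDNN number: cudnnGetVersion() packs major/minor/patch into a single integer. Assuming the cuDNN 9.x encoding (MAJOR * 10000 + MINOR * 100 + PATCH), 90300 decodes to 9.3.0. A small sketch:

```python
def decode_cudnn_version(v: int) -> str:
    """Decode the integer returned by cudnnGetVersion().

    Assumes the cuDNN 9.x encoding: MAJOR * 10000 + MINOR * 100 + PATCH.
    """
    major, rest = divmod(v, 10000)
    minor, patch = divmod(rest, 100)
    return f"{major}.{minor}.{patch}"

print(decode_cudnn_version(90300))  # → 9.3.0
```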

And I added the following lines to ~/.bashrc:

export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda/bin:$PATH
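To verify from inside Python that these exports actually reach the process running the script, here is a minimal stdlib-only check (the /usr/local/cuda paths are the usual JetPack defaults, so treat them as assumptions):

```python
import os
import shutil

def cuda_env_report() -> dict:
    """Report whether the CUDA paths from ~/.bashrc are visible to this process."""
    return {
        "nvcc_on_path": shutil.which("nvcc") is not None,
        "cuda_bin_in_path": "/usr/local/cuda/bin" in os.environ.get("PATH", ""),
        "cuda_lib_in_ld_library_path": "/usr/local/cuda/lib64" in os.environ.get("LD_LIBRARY_PATH", ""),
    }

print(cuda_env_report())
```

If the script is launched from a service, cron job, or IDE that does not source ~/.bashrc, all three can come back False even though they are True in an interactive shell.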

This is my system information:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.5 LTS
Release: 22.04
Codename: jammy

Do you have any idea why the GPU (and CUDA?) is not used by TensorFlow?

Thank you in advance!
Best regards
volume1

Dear @volume1 ,
Could you run the CUDA deviceQuery sample to confirm that the GPU is detected without any issue?

Dear @SivaRamaKrishnaNV ,

Thank you for your feedback.
Okay, then I must have done something wrong during installation. To be honest, I first had to search for what "deviceQuery" is.
I then found here that you should run the make command after the CUDA installation…
https://forums.developer.nvidia.com/t/how-to-run-devicequery/54624

On the other hand, I have read that CUDA comes along with the installation of the operating system for Jetson (!?)
However, the make command failed for me, even though I can at least find a CUDA directory on the device. Here is the log:


> Package glfw3 was not found in the pkg-config search path.
> Perhaps you should add the directory containing `glfw3.pc'
> to the PKG_CONFIG_PATH environment variable
> No package 'glfw3' found
> vulkanImageCUDA.cu:37:10: fatal error: GLFW/glfw3.h: No such file or directory
>    37 | #include <GLFW/glfw3.h>
>       |          ^~~~~~~~~~~~~~
> compilation terminated.
> (the same GLFW/glfw3.h error repeats five more times)
> make[1]: *** [Makefile:403: vulkanImageCUDA.o] Error 255
> make[1]: Leaving directory '/home/user/cuda-samples/Samples/5_Domain_Specific/vulkanImageCUDA'
> make: *** [Makefile:45: Samples/5_Domain_Specific/vulkanImageCUDA/Makefile.ph_build] Error 2

Is there now a clear procedure for setting up the Jetson Orin Nano with CUDA so that I can run an object detection model on it? Thank you very much for your feedback and great support!
Best regards,
volume1

Can you try make in /home/user/cuda-samples/Samples/1_Utilities/deviceQuery and run deviceQuery to confirm whether the GPU is detected? Also, try the nvidia-smi command.

May I know how you set up TF on the target?
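For reference, building and running just the deviceQuery sample might look like this (the cuda-samples checkout path is taken from the earlier log, so adjust it to your system):

```shell
# Build only the deviceQuery sample instead of the whole samples tree
cd /home/user/cuda-samples/Samples/1_Utilities/deviceQuery
make clean   # force a rebuild in case a stale build reports "Nothing to be done"
make

# Run it; on a working setup the output should list the Orin GPU
# and end with "Result = PASS"
./deviceQuery
```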

Dear @SivaRamaKrishnaNV ,

thank you very much for your reply!
Unfortunately the make command does not work for me:

make: Nothing to be done for 'all'.

and

nvidia-smi

has returned the following

Tue Jan 28 08:34:54 2025       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 540.4.0                Driver Version: 540.4.0      CUDA Version: 12.6     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Orin (nvgpu)                  N/A  | N/A              N/A |                  N/A |
| N/A   N/A  N/A               N/A /  N/A | Not Supported        |     N/A          N/A |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

To be honest, after the countless installation attempts I no longer know the original path of the TF installation.
If necessary, I can do a fresh installation (if there is a tutorial that works for the NVIDIA Jetson Orin Nano).

One of the pages I tried to follow was the following:
https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=aarch64-jetson&Compilation=Native&Distribution=Ubuntu&target_version=22.04&target_type=deb_local
and
https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#recommended-post

By the way:

lspci | grep -i nvidia

showed

0001:00:00.0 PCI bridge: NVIDIA Corporation Device 229e (rev a1)
0004:00:00.0 PCI bridge: NVIDIA Corporation Device 229c (rev a1)
0008:00:00.0 PCI bridge: NVIDIA Corporation Device 229c (rev a1)

This means that there is a device which is suitable for CUDA, correct?

If necessary, we could also have a short call so that I can explain the problem a little better?!

Best regards and thank you very much for your support!

Dear @volume1 ,
I installed TF and tested it as below on JetPack 6.2.

$ pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v61 tensorflow==2.16.1+nv24.08
>>> import tensorflow as tf
>>> import numpy as np
>>> print('tf_version: ' + str(tf.__version__))
tf_version: 2.16.1
>>> gpus = tf.config.experimental.list_physical_devices('GPU')
>>> print(gpus)
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

Please see the TF installation steps at Installing TensorFlow for Jetson Platform - NVIDIA Docs. Please change the command as per your JetPack version.

Dear @SivaRamaKrishnaNV ,

Wow, thank you! That helped me make some progress.
Now, when I run my script, I get two warnings:

/usr/lib/python3/dist-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.17.3 and <1.25.0 is required for this version of SciPy (detected version 1.26.4)
  warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"

and this one:

2025-01-28 12:09:09.576921: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:984] could not open file to read NUMA node: /sys/bus/pci/devices/0000:00:00.0/numa_node
Your kernel may have been built without NUMA support.

Is this critical?
But at least it now looks like the GPU is being recognized and used.

Best regards

No problem, you can continue with the same setup. Please file a new topic in case of any other issues.