Jetson Orin Nano GPU error while using custom carrier board

On the Orin Nano developer kit, we are able to load and execute our AI models on the GPU. But when the same root filesystem is cloned and flashed onto a custom carrier board, the models fall back to running on the CPU instead of the GPU.
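For reference, this is roughly how we check whether the CUDA runtime can see the iGPU at all (a minimal sketch only; it assumes PyTorch for Jetson is installed, but any CUDA-aware framework exposes the same information):

```bash
# Quick sanity checks on the target (assumption: PyTorch for Jetson is installed).
python3 -c "import torch; print('CUDA available:', torch.cuda.is_available())"

# The Tegra GPU kernel driver is nvgpu; its probe messages appear in the kernel log.
sudo dmesg | grep -i nvgpu
```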
Loading the AI models fails on our custom carrier board, and executing the nvidia-smi command there gives the following error:

nvidia@nvidia-desktop:~$ nvidia-smi
Unable to determine the device handle for GPU0002:00:00.0: Unknown Error

Is there any SOM-specific setting on the Jetson Orin Nano? We suspect the problem is SOM related: when we fit the production-version SOM on the developer kit carrier board, the GPU does not work either, whereas it worked earlier on the developer kit with the development-version SOM.

Hi,
Are you using the latest JetPack 6.1 and still hitting the issue? We would like to know which JetPack version you are using.

Also, please try flashing the default sample rootfs to the custom board. You mentioned a cloned image, and we would like to confirm whether the issue also occurs with the default sample rootfs.
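For reference, a minimal sketch of flashing the default sample rootfs from the host BSP directory; the tarball name, board config, and storage target shown are the devkit defaults, so substitute the ones that match your module and carrier board:

```bash
# Sketch only: populate rootfs/ with the sample root filesystem and flash it.
cd Linux_for_Tegra
sudo tar xpf ../Tegra_Linux_Sample-Root-Filesystem_*.tbz2 -C rootfs/
sudo ./apply_binaries.sh                                      # install NVIDIA BSP packages into rootfs/
sudo ./tools/l4t_create_default_user.sh -u nvidia -p nvidia   # optional: preconfigure the login
sudo ./flash.sh jetson-orin-nano-devkit internal              # board config / storage target vary per design
```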

The sample rootfs doesn’t contain CUDA, cuDNN, etc.; we would have to download them, which is not possible on the custom board as it doesn’t have any network support.

We are using JetPack 6.0.
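For reference, the exact release can be confirmed on the module itself with standard L4T files and packages, so this check is independent of the carrier board:

```bash
# Confirm the installed L4T / JetPack release on the target.
cat /etc/nv_tegra_release
dpkg-query --show nvidia-l4t-core
apt-cache show nvidia-jetpack | grep Version   # only meaningful if the JetPack meta-package is installed
```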

Hi,
For JetPack 6.0 there is a known issue:

Jetson/L4T/r36.3.x patches - eLinux.org
[Flash] Backup script issue in JP6.0

You may apply the fix and try again, or consider upgrading to JetPack 6.1 or 6.2.
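For context, this is the backup/restore flow that the JP6.0 patch touches, run from the host's Linux_for_Tegra directory with the device in recovery mode (the board name below is the devkit default, and NVMe-based setups may need additional options such as -e to select the external device):

```bash
# Clone (backup) the original device, then restore that image onto another unit.
sudo ./tools/backup_restore/l4t_backup_restore.sh -b jetson-orin-nano-devkit
sudo ./tools/backup_restore/l4t_backup_restore.sh -r jetson-orin-nano-devkit
```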

It is working now, thanks.
We re-created the image and flashed it.
