CUDA and NVIDIA driver configuration ERROR for PyTorch

I’m using a Jetson Orin Nano and running into issues exporting a YOLOv5 model to ONNX. Here is the context:

python export.py --weights runs/train/exp4/weights/last.pt --img 320 --batch 1 --device 0 --include onnx

Error:

Error in cpuinfo: prctl(PR_SVE_GET_VL) failed
export: data=data/coco128.yaml, weights=['runs/train/exp4/weights/last.pt'], imgsz=[320], batch_size=1, device=0, half=False, inplace=False, keras=False, optimize=False, int8=False, per_tensor=False, dynamic=False, simplify=False, mlmodel=False, opset=17, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['onnx']
Traceback (most recent call last):
  File "/home/onur/Desktop/projects/denemeV2/yolov5/export.py", line 1530, in <module>
    main(opt)
  File "/home/onur/Desktop/projects/denemeV2/yolov5/export.py", line 1525, in main
    run(**vars(opt))
  File "/home/onur/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/onur/Desktop/projects/denemeV2/yolov5/export.py", line 1367, in run
    device = select_device(device)
  File "/home/onur/Desktop/projects/denemeV2/yolov5/utils/torch_utils.py", line 124, in select_device
    assert torch.cuda.is_available() and torch.cuda.device_count() >= len(
AssertionError: Invalid CUDA '--device 0' requested, use '--device cpu' or pass valid CUDA device(s)
  1. nvidia-smi Output:
Driver Version: N/A
CUDA Version: 12.2
GPU: Orin (nvgpu) - No running processes found.
  2. nvcc --version Output:
CUDA compilation tools, release 12.2, V12.2.140
  3. PyTorch Check:
import torch
print("CUDA available:", torch.cuda.is_available())
print("Number of CUDA devices:", torch.cuda.device_count())
print("Default CUDA device:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "None")
  • Result:
CUDA available: False
Number of CUDA devices: 0
Default CUDA device: None
  4. tegrastats Output:
GR3D_FREQ and temperatures seem normal, with CPU and GPU usage visible.

Problem

  • Even though nvcc reports a valid CUDA version and tegrastats shows GPU activity, PyTorch fails to detect any CUDA device (a quick build check is sketched below).
  • Additionally, nvidia-smi does not display a valid driver version.
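
One thing worth ruling out (not confirmed here, just a common cause on Jetson): a torch wheel installed from the default PyPI index is typically a CPU-only aarch64 build, which reports CUDA as unavailable regardless of what the JetPack toolkit and driver are doing. A minimal check of which build is installed:

python3 -c "import torch, platform; print(torch.__version__, torch.version.cuda, platform.machine())"

A CPU-only build prints None for the CUDA field; a JetPack-specific build prints the CUDA version it was compiled against.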

Steps Taken

  • Tried reinstalling the NVIDIA drivers, but the nvidia-driver-540 package was not found.
  • Ran ubuntu-drivers devices, but it produced no output (see the note after this list).
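
Note, offered as general Jetson context rather than a confirmed diagnosis: the Orin’s integrated GPU driver ships with the L4T/JetPack board support package, not with the desktop nvidia-driver-5xx apt packages, which is why ubuntu-drivers devices returns nothing and why nvidia-smi can show “Driver Version: N/A”. The installed L4T release can be checked with, for example:

cat /etc/nv_tegra_release
dpkg -l | grep nvidia-l4t-core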

Request for Help

Hi,

Sorry for the late update.

You can find our prebuilt PyTorch/TorchVision/TorchAudio packages at the link below. There are many more DL-related packages available there for JetPack 6.1 users:

http://jetson.webredirect.org/jp6/cu126

Thanks.
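
For reference, assuming the page above acts as a pip-compatible package index (it may instead simply list .whl files to download and install directly), the installation and a follow-up GPU check would look roughly like this sketch; the exact packages and versions to pick are not specified here:

# hypothetical usage of the index linked above; adjust to the wheels actually listed there
python3 -m pip install torch torchvision torchaudio --index-url http://jetson.webredirect.org/jp6/cu126

# then re-check that the GPU is visible to PyTorch
python3 -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"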

