Unable to Connect CUDA with YOLOv8 on Jetson Xavier AGX

Hello NVIDIA Community,

I am currently experiencing issues with connecting CUDA to my object detection code using YOLOv8 on my Jetson Xavier AGX. Below are the specifics of my setup:

JetPack Version: 5.1.4
CUDA Version: 11.4
Python Version: 3.8
Installed PyTorch Version: 0.19.1

Description of the Issue:

Despite following the installation steps for setting up PyTorch with CUDA support, I cannot get CUDA recognized in my code, even though jtop reports OpenCV with CUDA as YES. When I run the following checks in Python:

import torch

print("CUDA available:", torch.cuda.is_available())
print("CUDA device count:", torch.cuda.device_count())
if torch.cuda.is_available():
    print("Current CUDA device:", torch.cuda.current_device())
    print("CUDA device name:", torch.cuda.get_device_name(0))

The output indicates that CUDA is not available:

CUDA available: False
CUDA device count: 0

Additionally, I encountered the following error when trying to run my YOLOv8 object detection code, which relies on CUDA for efficient processing:

AssertionError: Torch not compiled with CUDA enabled
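A quick way to tell whether the installed wheel was built with CUDA at all (as opposed to a runtime problem) is to inspect torch's build metadata. This is a minimal sketch; the `looks_like_jetson_wheel` helper and the example version strings are my own assumptions, not part of any official API:

```python
def looks_like_jetson_wheel(version: str) -> bool:
    # Assumption: NVIDIA's Jetson wheels carry a "+nv" local version tag
    # (e.g. "2.0.0+nv23.05"); a bare version string usually means a
    # CPU-only wheel pulled from PyPI.
    return "+nv" in version

try:
    import torch
    # torch.version.cuda is the CUDA version the wheel was compiled
    # against; it is None for CPU-only builds, which would match the
    # "Torch not compiled with CUDA enabled" assertion above.
    print("torch version:", torch.__version__)
    print("built against CUDA:", torch.version.cuda)
    print("Jetson wheel:", looks_like_jetson_wheel(torch.__version__))
except ImportError:
    print("torch is not installed in this environment")
```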

Hi,

Which PyTorch did you install?
It’s recommended to use our prebuilt package, which has GPU support enabled.

Thanks.

Hello,

I followed the steps provided for installing PyTorch on my Jetson Xavier AGX and successfully ran a small Python script to check the installed versions. Here are the results:

  • PyTorch version: 2.0.1
  • TorchVision version: 0.15.2
  • TorchAudio version: 2.0.2

I expected that CUDA would be available for use after this installation. However, to my surprise, CUDA is still not available. Here’s what I checked in the Python interactive shell:

import torch
print(torch.cuda.is_available())

The output returned is False, indicating that CUDA is not recognized:

$ python3
Python 3.8.10 (default, Sep 11 2024, 16:02:53)
[GCC 9.4.0] on linux
>>> import torch
>>> print(torch.cuda.is_available())
False

Details of My Setup:

  • JetPack Version: 5.1.4
  • CUDA Version: 11.4
  • Python Version: 3.8

After searching online, I reinstalled PyTorch from the NVIDIA wheel:
$ python3 -m pip install --no-cache-dir torch-2.0.0+nv23.05-cp38-cp38-linux_aarch64.whl

This version installed successfully, and CUDA is now reported as available along with the device name. However, when I run my object detection code, I get the following error:

python3 test2.py

CUDA is available. Using GPU for inference.

/home/esl/.local/lib/python3.8/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension:
  warn(f"Failed to load image Python extension: {e}")
Traceback (most recent call last):
  File "test2.py", line 42, in <module>
    results = model.predict(source=rgb_frame, device=device)
  File "/home/esl/.local/lib/python3.8/site-packages/ultralytics/engine/model.py", line 554, in predict
    return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
  File "/home/esl/.local/lib/python3.8/site-packages/ultralytics/engine/predictor.py", line 168, in __call__
    return list(self.stream_inference(source, model, *args, **kwargs))  # merge list of Result into one
  File "/home/esl/.local/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/home/esl/.local/lib/python3.8/site-packages/ultralytics/engine/predictor.py", line 261, in stream_inference
    self.results = self.postprocess(preds, im, im0s)
  File "/home/esl/.local/lib/python3.8/site-packages/ultralytics/models/yolo/detect/predict.py", line 25, in postprocess
    preds = ops.non_max_suppression(
  File "/home/esl/.local/lib/python3.8/site-packages/ultralytics/utils/ops.py", line 292, in non_max_suppression
    i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
  File "/home/esl/.local/lib/python3.8/site-packages/torchvision/ops/boxes.py", line 40, in nms
    _assert_has_ops()
  File "/home/esl/.local/lib/python3.8/site-packages/torchvision/extension.py", line 48, in _assert_has_ops
    raise RuntimeError(
RuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision for the compatibility matrix. Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install.
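This error is the classic torch/torchvision version mismatch. A small helper can sanity-check the installed pair before running inference; note the table below is only a partial excerpt of the upstream compatibility matrix (the listed pairs are ones I'm confident of), so treat anything outside it as an assumption:

```python
# Partial excerpt of the torch <-> torchvision compatibility matrix,
# keyed by major.minor version; not exhaustive -- verify against the
# table in the pytorch/vision README for other releases.
COMPAT = {
    "1.12": "0.13",
    "1.13": "0.14",
    "2.0": "0.15",
    "2.1": "0.16",
}

def minor(version: str) -> str:
    # "2.0.0+nv23.05" -> "2.0" (strip the local tag, keep major.minor)
    return ".".join(version.split("+")[0].split(".")[:2])

def compatible(torch_version: str, torchvision_version: str) -> bool:
    return COMPAT.get(minor(torch_version)) == minor(torchvision_version)

# torch 2.0.x pairs with torchvision 0.15.x; a 0.19.x torchvision
# belongs to a much newer torch and triggers the C++ ops error above.
print(compatible("2.0.0+nv23.05", "0.15.2"))  # True
print(compatible("2.0.0+nv23.05", "0.19.1"))  # False
```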

So I tried to download the matching torchvision wheel for this version:

wget https://developer.download.nvidia.com/compute/redist/jp/v511/pytorch/torchvision-0.15.2+nv23.05-cp38-cp38-linux_aarch64.whl
--2024-10-04 14:51:41--  https://developer.download.nvidia.com/compute/redist/jp/v511/pytorch/torchvision-0.15.2+nv23.05-cp38-cp38-linux_aarch64.whl
Resolving developer.download.nvidia.com (developer.download.nvidia.com)... 152.199.39.144
Connecting to developer.download.nvidia.com (developer.download.nvidia.com)|152.199.39.144|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2024-10-04 14:51:42 ERROR 404: Not Found.

Can you help me with this?

Hi

You will need to build TorchVision from source.
Please find the instructions below:

Instructions → Installation → torchvision:

Thanks.
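For reference, the from-source build usually looks roughly like this. This is a sketch only, assuming torch 2.0.0 (which pairs with torchvision 0.15.x); the exact branch and dependency list should be taken from the linked instructions:

```shell
# Build dependencies for torchvision's C++/image extensions (assumed list)
sudo apt-get install -y libjpeg-dev zlib1g-dev libpython3-dev \
    libopenblas-dev libavcodec-dev libavformat-dev libswscale-dev

# Check out the torchvision tag matching the installed torch (2.0.0 -> 0.15.x)
git clone --branch v0.15.2 https://github.com/pytorch/vision torchvision
cd torchvision

# BUILD_VERSION pins the reported version; the build compiles against
# the already-installed CUDA-enabled torch wheel
export BUILD_VERSION=0.15.2
python3 setup.py install --user
```

After the build finishes, `torchvision.ops.nms` should load its C++ ops and the RuntimeError above should disappear.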

Thanks for the help.
I can now successfully use CUDA.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.