How to run YOLOv11 inference on Jetson Orin using device=dla:0

I am facing a compatibility issue with PyTorch and CUDA 12.2…

On the Jetson Orin we have CUDA 12.2 and I am not able to download a compatible PyTorch version to run YOLOv11 inference…

The issue I am facing is:

(y11) orin@ubuntu:~/Desktop/yolo11$ yolo task=detect mode=predict model=yolo11s.pt source='/home/orin/Desktop/yolo11/h.mp4' imgsz=720 device=0

Ultralytics 8.3.28 🚀 Python-3.10.12 torch-2.5.1

Traceback (most recent call last):
  File "/home/orin/Desktop/yolo11/y11/bin/yolo", line 8, in <module>
    sys.exit(entrypoint())
  File "/home/orin/Desktop/yolo11/y11/lib/python3.10/site-packages/ultralytics/cfg/__init__.py", line 966, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "/home/orin/Desktop/yolo11/y11/lib/python3.10/site-packages/ultralytics/engine/model.py", line 547, in predict
    self.predictor.setup_model(model=self.model, verbose=is_cli)
  File "/home/orin/Desktop/yolo11/y11/lib/python3.10/site-packages/ultralytics/engine/predictor.py", line 306, in setup_model
    device=select_device(self.args.device, verbose=verbose),
  File "/home/orin/Desktop/yolo11/y11/lib/python3.10/site-packages/ultralytics/utils/torch_utils.py", line 192, in select_device
    raise ValueError(
ValueError: Invalid CUDA 'device=0,1' requested. Use 'device=cpu' or pass valid CUDA device(s) if available, i.e. 'device=0' or 'device=0,1,2,3' for Multi-GPU.

torch.cuda.is_available(): False
torch.cuda.device_count(): 0
os.environ['CUDA_VISIBLE_DEVICES']: None

See https://pytorch.org/get-started/locally/ for up-to-date torch install instructions if no CUDA devices are seen by torch.

Hi,

Which Jetson Orin devkit and which JetPack SW are you using?

Here are some suggestions for the common issues:

1. Performance

Please run the commands below before benchmarking a deep learning use case:

$ sudo nvpmodel -m 0
$ sudo jetson_clocks
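
If helpful, you can confirm that the settings took effect with the standard JetPack tools (a quick check, not required):

$ sudo nvpmodel -q          # prints the active power mode
$ sudo jetson_clocks --show # prints the current clock configuration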

2. Installation

Installation guide of deep learning frameworks on Jetson:
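
Once torch is installed from the Jetson-specific package, a quick way to check whether it actually sees the Orin GPU (the same check that fails in the traceback above) is:

$ python3 -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.cuda.device_count())"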

3. Tutorial

Startup deep learning tutorial:

4. Report issue

If these suggestions don't help and you want to report an issue to us, please share the model, the command/steps, and the customized app (if any) with us so we can reproduce it locally.

Thanks!

Hi sangeeta.charantimath,
It looks like you're using JetPack 6.0.
Please uninstall torch and install our package as outlined in the forums. Please note that we only support torch-2.1, torch-2.2, and torch-2.3 on JetPack 6.0.
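
A rough outline of that swap, assuming a JetPack 6.0 wheel downloaded from the forum post (the wheel filename below is only a placeholder):

$ pip uninstall -y torch torchvision
$ pip install ./torch-2.3.0-<jetson-wheel-placeholder>-linux_aarch64.whl  # use the actual wheel from the forum post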

Thanks

I have a Jetson Orin device and want to run YOLOv11 on it. How can I do that? I use JetPack 6.0, so can you give me some steps to achieve it?

Hi,

Please refer to the provided document to set up your virtual environment.
Then, download the package provided in the forums and install torch with pip, based on your CUDA version.
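
A minimal sketch of those steps (the environment name and wheel filename are illustrative; use the wheel matching your JetPack/CUDA version from the forum post):

$ python3 -m venv y11 && source y11/bin/activate
$ pip install ultralytics
$ pip uninstall -y torch                  # remove the generic PyPI build pulled in by ultralytics
$ pip install ./torch-<jetson-wheel>.whl  # JetPack/CUDA-matched wheel from the forum post
$ yolo task=detect mode=predict model=yolo11s.pt source=video.mp4 device=0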

Thanks.

Hi,
I succeeded in running YOLO11 on the device. Next, I want to connect a CSI camera to
perform real-time detection.
The code below opens the CSI camera with OpenCV:

import cv2

camera_device = "/dev/video0"

# Open the camera stream
cap = cv2.VideoCapture(camera_device)

***** I run this code but it seems to get stuck in VideoCapture(). Nothing shows up, not even an error.

***** Some solutions I found and tried to check whether the CSI camera is available:

  1. I run the command nvgstcapture-1.0 and it works well; moreover, I can capture a picture and save it to disk.
  2. I run the command nvgstcapture-1.0 --mode=2 --automate --capture-aut, but it does not record video and throws an error.
  3. I have already installed v4l2 (Video4Linux2).
  4. My OpenCV version is 4.10.0.

What I want is to open the camera in Python and display the frames.

Hi,
It seems that your issue is unrelated to this topic. Please open a new topic for it.
In the meantime, have you tried the following command to check if your CSI camera is available?

v4l2-ctl --list-devices

The command should be

nvgstcapture-1.0 --mode=2 --automate --capture-auto
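
For reference, CSI sensors on Jetson are normally opened in OpenCV through a GStreamer pipeline using nvarguscamerasrc rather than a plain /dev/video0 node, which is why VideoCapture("/dev/video0") can hang. A minimal sketch, assuming sensor-id 0 and an OpenCV build with GStreamer support:

import cv2

# Argus (CSI) source -> NVMM buffers -> BGR frames that OpenCV can consume
pipeline = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ret, frame = cap.read()  # ret is False if the pipeline could not be opened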

Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.