PyTorch & torchvision compatibility issue on L4T 35.5.0

Hi,

Could you check if you are using the correct Ultralytics software?
Some warnings in your log appear to be related to incompatible software:

daniel@daniel-nvidia:~/Work$ yolo track model=yolov8n.engine source=../Videos/Worlds_longest_drone_fpv_one_shot.mp4
WARNING ⚠️ Python>=3.10 is required, but Python==3.8.10 is currently installed
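
If you want to confirm which interpreter and Ultralytics build the yolo CLI is actually picking up, a quick check like the following may help (a minimal sketch; run it with the same python3 that the yolo command resolves to):

import sys
import ultralytics

print(sys.executable, sys.version.split()[0])  # interpreter and Python version the packages are installed into
print(ultralytics.__version__)                 # compare against the Python support listed for that release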

We tested YOLO on JetPack 5 and it works correctly (yolo predict with yolo11n).
Here are the detailed steps for your reference:

$ wget https://developer.download.nvidia.cn/compute/redist/jp/v512/pytorch/torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl
$ pip3 install torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl 
$ git clone --branch v0.16.1 https://github.com/pytorch/vision torchvision
$ cd torchvision/
$ export BUILD_VERSION=0.16.1
$ sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libopenblas-dev libavcodec-dev libavformat-dev libswscale-dev
$ sudo apt-get install python3-pip libopenblas-base libopenmpi-dev libomp-dev
$ python3 setup.py install --user
$ pip3 install ultralytics
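
Before running the export, it may be worth verifying that the PyTorch wheel and the source-built torchvision both load with CUDA support. A minimal sanity check (assuming the versions above; the nms call exercises the compiled CUDA extension that Ultralytics relies on for postprocessing):

import torch
import torchvision
from torchvision.ops import nms

print(torch.__version__, torch.cuda.is_available())  # expect the NVIDIA 2.1.0 wheel and True
print(torchvision.__version__)                       # expect 0.16.1 to match torch 2.1.0

# Quick check that the CUDA ops built correctly
boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0], [1.0, 1.0, 9.0, 9.0]], device="cuda")
scores = torch.tensor([0.9, 0.8], device="cuda")
print(nms(boxes, scores, iou_threshold=0.5))
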
$ yolo export model=yolo11n.pt format=engine  # creates 'yolo11n.engine'
WARNING ⚠️ TensorRT requires GPU export, automatically assigning device=0
Ultralytics 8.3.27 🚀 Python-3.8.10 torch-2.1.0a0+41361538.nv23.06 CUDA:0 (Xavier, 30991MiB)
YOLO11n summary (fused): 238 layers, 2,616,248 parameters, 0 gradients, 6.5 GFLOPs

PyTorch: starting from 'yolo11n.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (5.4 MB)
...
[11/04/2024-07:28:05] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1597, GPU 10616 (MiB)
[11/04/2024-07:28:05] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +0, GPU +17, now: CPU 0, GPU 17 (MiB)
TensorRT: export success ✅ 239.6s, saved as 'yolo11n.engine' (13.5 MB)

Export complete (245.1s)
Results saved to /home/nvidia/topic_310929
Predict:         yolo predict task=detect model=yolo11n.engine imgsz=640  
Validate:        yolo val task=detect model=yolo11n.engine imgsz=640 data=/usr/src/ultralytics/ultralytics/cfg/datasets/coco.yaml  
Visualize:       https://netron.app
💡 Learn more at https://docs.ultralytics.com/modes/export
$ yolo predict model=yolo11n.engine source='https://ultralytics.com/images/bus.jpg'
WARNING ⚠️ Unable to automatically guess model task, assuming 'task=detect'. Explicitly define task for your model, i.e. 'task=detect', 'segment', 'classify','pose' or 'obb'.
Ultralytics 8.3.27 🚀 Python-3.8.10 torch-2.1.0a0+41361538.nv23.06 CUDA:0 (Xavier, 30991MiB)
Loading yolo11n.engine for TensorRT inference...
[11/04/2024-07:28:54] [TRT] [I] Loaded engine size: 13 MiB
[11/04/2024-07:28:56] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +343, GPU +324, now: CPU 690, GPU 8532 (MiB)
[11/04/2024-07:28:56] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +14, now: CPU 0, GPU 14 (MiB)
[11/04/2024-07:28:56] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 677, GPU 8532 (MiB)
[11/04/2024-07:28:56] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +20, now: CPU 0, GPU 34 (MiB)

Downloading https://ultralytics.com/images/bus.jpg to 'bus.jpg'...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 134k/134k [00:00<00:00, 954kB/s]
image 1/1 /home/nvidia/topic_310929/bus.jpg: 640x640 4 persons, 1 bus, 10.3ms
Speed: 9.0ms preprocess, 10.3ms inference, 8.0ms postprocess per image at shape (1, 3, 640, 640)
Results saved to runs/detect/predict
💡 Learn more at https://docs.ultralytics.com/modes/predict
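
The same engine can also be driven from the Ultralytics Python API instead of the CLI; a short sketch (assuming the yolo11n.engine exported above is in the working directory):

from ultralytics import YOLO

# Passing task explicitly avoids the "Unable to automatically guess model task" warning shown above
model = YOLO("yolo11n.engine", task="detect")

# Predict on a single image (same source as the CLI example)
results = model.predict("https://ultralytics.com/images/bus.jpg", imgsz=640)

# Or track a video, matching the original yolo track command
# results = model.track("../Videos/Worlds_longest_drone_fpv_one_shot.mp4")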

Thanks.