Orin Nano: TensorRT installed but module not found when using YOLO

Hi, when converting a yolov8n.pt model to yolov8n.engine using the Ultralytics YOLO library, I get the following error:

WARNING ⚠️ TensorRT requires GPU export, automatically assigning device=0
Ultralytics YOLOv8.2.70 🚀 Python-3.10.12 torch-2.3.0 CUDA:0 (Orin, 7620MiB)
YOLOv8n summary (fused): 168 layers, 3,151,904 parameters, 0 gradients, 8.7 GFLOPs

PyTorch: starting from 'yolov8n.pt' with input shape (8, 3, 640, 640) BCHW and output shape(s) (8, 84, 8400) (6.2 MB)

ONNX: starting export with onnx 1.16.1 opset 17...
ONNX: export success ✅ 9.2s, saved as 'yolov8n.onnx' (12.1 MB)
requirements: Ultralytics requirement ['tensorrt>7.0.0,<=10.1.0'] not found, attempting AutoUpdate...
  error: subprocess-exited-with-error
  
  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [17 lines of output]
      Traceback (most recent call last):
        File "/home/jetson/Documents/yolo/my-venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/home/jetson/Documents/yolo/my-venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
        File "/home/jetson/Documents/yolo/my-venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
        File "/tmp/pip-build-env-1lk6p2wi/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 327, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=[])
        File "/tmp/pip-build-env-1lk6p2wi/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 297, in _get_build_requires
          self.run_setup()
        File "/tmp/pip-build-env-1lk6p2wi/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 497, in run_setup
          super().run_setup(setup_script=setup_script)
        File "/tmp/pip-build-env-1lk6p2wi/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 313, in run_setup
          exec(code, locals())
        File "<string>", line 67, in <module>
      RuntimeError: TensorRT currently only builds wheels for x86_64 processors
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.

However, when I run dpkg -l | grep TensorRT, I get:

ii  graphsurgeon-tf                              8.6.2.3-1+cuda12.2                                arm64        GraphSurgeon for TensorRT package
ii  libnvinfer-bin                               8.6.2.3-1+cuda12.2                                arm64        TensorRT binaries
ii  libnvinfer-dev                               8.6.2.3-1+cuda12.2                                arm64        TensorRT development libraries
ii  libnvinfer-dispatch-dev                      8.6.2.3-1+cuda12.2                                arm64        TensorRT development dispatch runtime libraries
ii  libnvinfer-dispatch8                         8.6.2.3-1+cuda12.2                                arm64        TensorRT dispatch runtime library
ii  libnvinfer-headers-dev                       8.6.2.3-1+cuda12.2                                arm64        TensorRT development headers
ii  libnvinfer-headers-plugin-dev                8.6.2.3-1+cuda12.2                                arm64        TensorRT plugin headers
ii  libnvinfer-lean-dev                          8.6.2.3-1+cuda12.2                                arm64        TensorRT lean runtime libraries
ii  libnvinfer-lean8                             8.6.2.3-1+cuda12.2                                arm64        TensorRT lean runtime library
ii  libnvinfer-plugin-dev                        8.6.2.3-1+cuda12.2                                arm64        TensorRT plugin libraries
ii  libnvinfer-plugin8                           8.6.2.3-1+cuda12.2                                arm64        TensorRT plugin libraries
ii  libnvinfer-samples                           8.6.2.3-1+cuda12.2                                all          TensorRT samples
ii  libnvinfer-vc-plugin-dev                     8.6.2.3-1+cuda12.2                                arm64        TensorRT vc-plugin library
ii  libnvinfer-vc-plugin8                        8.6.2.3-1+cuda12.2                                arm64        TensorRT vc-plugin library
ii  libnvinfer8                                  8.6.2.3-1+cuda12.2                                arm64        TensorRT runtime libraries
ii  libnvonnxparsers-dev                         8.6.2.3-1+cuda12.2                                arm64        TensorRT ONNX libraries
ii  libnvonnxparsers8                            8.6.2.3-1+cuda12.2                                arm64        TensorRT ONNX libraries
ii  libnvparsers-dev                             8.6.2.3-1+cuda12.2                                arm64        TensorRT parsers libraries
ii  libnvparsers8                                8.6.2.3-1+cuda12.2                                arm64        TensorRT parsers libraries
ii  nvidia-tensorrt                              6.0+b106                                          arm64        NVIDIA TensorRT Meta Package
ii  nvidia-tensorrt-dev                          6.0+b106                                          arm64        NVIDIA TensorRT dev Meta Package
ii  onnx-graphsurgeon                            8.6.2.3-1+cuda12.2                                arm64        ONNX GraphSurgeon for TensorRT package
ii  python3-libnvinfer                           8.6.2.3-1+cuda12.2                                arm64        Python 3 bindings for TensorRT standard runtime
ii  python3-libnvinfer-dev                       8.6.2.3-1+cuda12.2                                arm64        Python 3 development package for TensorRT standard runtime
ii  python3-libnvinfer-dispatch                  8.6.2.3-1+cuda12.2                                arm64        Python 3 bindings for TensorRT dispatch runtime
ii  python3-libnvinfer-lean                      8.6.2.3-1+cuda12.2                                arm64        Python 3 bindings for TensorRT lean runtime
ii  tensorrt                                     8.6.2.3-1+cuda12.2                                arm64        Meta package for TensorRT
ii  tensorrt-libs                                8.6.2.3-1+cuda12.2                                arm64        Meta package for TensorRT runtime libraries
ii  uff-converter-tf                             8.6.2.3-1+cuda12.2                                arm64        UFF converter for TensorRT package

which indicates that all the necessary TensorRT libraries are already installed on the system.
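One thing worth checking: the apt packages (python3-libnvinfer) install the TensorRT Python bindings into the system dist-packages, so a venv created without access to system site-packages will not see them even though dpkg lists them as installed. A minimal sketch to verify what your venv's interpreter can actually import:

```python
# Check whether the TensorRT Python bindings are visible to this
# interpreter. Run this with the same venv Python that runs YOLO.
import importlib.util

spec = importlib.util.find_spec("tensorrt")
if spec is None:
    # Explains why Ultralytics falls back to pip-installing tensorrt,
    # which then fails to build a wheel on aarch64.
    print("tensorrt NOT importable from this interpreter")
else:
    print(f"tensorrt found at {spec.origin}")
```

If this prints "NOT importable" inside the venv but finds the module under the system python3, the problem is venv isolation rather than a missing installation.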

I am trying to run this example code:

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Export the model to a TensorRT INT8 engine (calibrated on coco.yaml)
model.export(
    format="engine",
    dynamic=True,
    batch=8,
    workspace=4,
    int8=True,
    data="coco.yaml",
)

# Load the exported TensorRT INT8 model
model = YOLO("yolov8n.engine", task="detect")

# Run inference
result = model.predict("https://ultralytics.com/images/bus.jpg")

Hi,

Thanks for the feedback.
We will give it a try and provide more info to you.

Thanks.

sudo apt-get install tensorrt nvidia-tensorrt-dev python3-libnvinfer-dev
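The build failure happens because pip inside the venv cannot import the system-wide TensorRT bindings and instead tries to build the tensorrt wheel from source, which the sdist refuses to do on non-x86_64 machines. A minimal sketch of a workaround, assuming the venv path shown in the traceback above: recreate the venv so it can see the system site-packages where the apt-installed python3-libnvinfer bindings live.

```shell
# Recreate the virtual environment with visibility into the system
# dist-packages, where apt's python3-libnvinfer installs the TensorRT
# Python bindings. The path is taken from the traceback above.
python3 -m venv --system-site-packages ~/Documents/yolo/my-venv
source ~/Documents/yolo/my-venv/bin/activate

# Verify the bindings are importable before re-running the export.
python -c "import tensorrt; print(tensorrt.__version__)"
```

An alternative is to symlink the system tensorrt package into the venv's site-packages directory, but recreating the venv with --system-site-packages is the simpler route.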
