Container image for YOLOv8 with latest JetPack 6

Hi! I’m building a new container image that includes YOLOv8, and I want it to run with TensorRT on the latest JetPack 6 from NVIDIA. I’m running into a dependency nightmare. This is what I get when running the build:

[1/2] STEP 2/7: RUN pip install --upgrade pip
--> Using cache 5754d1b51417a0ad741bb1ac9bc7211c8bf84a67bad0583c8be85bb8b2f197c2
--> 5754d1b51417
[1/2] STEP 3/7: RUN apt upgrade -y
--> Using cache 676fb7d306d776a7c99afa72a8a5681624c25419d0cc0d5f91b1d6441fbfc55b
--> 676fb7d306d7
[1/2] STEP 4/7: RUN pip install ultralytics protobuf grpcio-tools
--> Using cache 292888c5c620c96dc863c6510eb9ec1d857a8a9c722720f7ebcb9ca7e44d0192
--> 292888c5c620
[1/2] STEP 5/7: COPY ./protobuf /protobuf
--> Using cache e2bf37a8a67d3e716ec1183f71a936147f4ef9be42f0761688a512b31cc43362
--> e2bf37a8a67d
[1/2] STEP 6/7: RUN cd /protobuf/ && /usr/bin/python3 -m grpc_tools.protoc -I./ --python_out=. --pyi_out=. --grpc_python_out=. ./yoloserving.proto
--> Using cache f9017b3e53bcbc351f9c75052d341eddd9b82ebc607a3c84fe8970070b021a74
--> f9017b3e53bc
[1/2] STEP 7/7: RUN cd / && yolo export model=yolov8n.pt format=engine device=0
Downloading https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n.pt to 'yolov8n.pt'...
100%|██████████| 6.23M/6.23M [00:00<00:00, 66.0MB/s]
Ultralytics YOLOv8.2.26 🚀 Python-3.10.12 torch-2.1.0 CUDA:0 (Orin, 62511MiB)
YOLOv8n summary (fused): 168 layers, 3151904 parameters, 0 gradients, 8.7 GFLOPs

PyTorch: starting from 'yolov8n.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (6.2 MB)
requirements: Ultralytics requirement ['onnxsim>=0.4.33'] not found, attempting AutoUpdate...
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Collecting onnxsim>=0.4.33
  Downloading onnxsim-0.4.36.tar.gz (21.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 21.0/21.0 MB 66.3 MB/s eta 0:00:00
  Preparing metadata (setup.py): started
  Preparing metadata (setup.py): finished with status 'done'
Requirement already satisfied: onnx in /usr/local/lib/python3.10/dist-packages (from onnxsim>=0.4.33) (1.16.0)
Collecting rich (from onnxsim>=0.4.33)
  Downloading rich-13.7.1-py3-none-any.whl.metadata (18 kB)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from onnx->onnxsim>=0.4.33) (1.26.1)
Requirement already satisfied: protobuf>=3.20.2 in /usr/local/lib/python3.10/dist-packages (from onnx->onnxsim>=0.4.33) (5.27.0)
Collecting markdown-it-py>=2.2.0 (from rich->onnxsim>=0.4.33)
  Downloading markdown_it_py-3.0.0-py3-none-any.whl.metadata (6.9 kB)
Collecting pygments<3.0.0,>=2.13.0 (from rich->onnxsim>=0.4.33)
  Downloading pygments-2.18.0-py3-none-any.whl.metadata (2.5 kB)
Collecting mdurl~=0.1 (from markdown-it-py>=2.2.0->rich->onnxsim>=0.4.33)
  Downloading mdurl-0.1.2-py3-none-any.whl.metadata (1.6 kB)
Downloading rich-13.7.1-py3-none-any.whl (240 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 240.7/240.7 kB 122.7 MB/s eta 0:00:00
Downloading markdown_it_py-3.0.0-py3-none-any.whl (87 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 87.5/87.5 kB 107.3 MB/s eta 0:00:00
Downloading pygments-2.18.0-py3-none-any.whl (1.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 93.0 MB/s eta 0:00:00
Downloading mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Building wheels for collected packages: onnxsim
  Building wheel for onnxsim (setup.py): started
  Building wheel for onnxsim (setup.py): still running...
  Building wheel for onnxsim (setup.py): still running...
  Building wheel for onnxsim (setup.py): still running...
  Building wheel for onnxsim (setup.py): finished with status 'done'
  Created wheel for onnxsim: filename=onnxsim-0.4.36-cp310-cp310-linux_aarch64.whl size=2021905 sha256=911a34d1434ef08ea45d13b7de9bd8a8e26797a8f44f24bc5c939264f032177e
  Stored in directory: /tmp/pip-ephem-wheel-cache-gvukjngj/wheels/51/a9/c1/cbe7d070e6fa8da6040770c3f71353ff5dc586bdb128546685
Successfully built onnxsim
Installing collected packages: pygments, mdurl, markdown-it-py, rich, onnxsim
  Attempting uninstall: pygments
    Found existing installation: Pygments 2.11.2
    Uninstalling Pygments-2.11.2:
      Successfully uninstalled Pygments-2.11.2
Successfully installed markdown-it-py-3.0.0 mdurl-0.1.2 onnxsim-0.4.36 pygments-2.18.0 rich-13.7.1

requirements: AutoUpdate success ✅ 236.1s, installed 1 package: ['onnxsim>=0.4.33']
requirements: ⚠️  Restart runtime or rerun command for updates to take effect

ONNX: export failure ❌ 236.1s: cannot import name '_message' from 'google.protobuf.pyext' (/usr/local/lib/python3.10/dist-packages/google/protobuf/pyext/__init__.py)
TensorRT: export failure ❌ 236.1s: cannot import name '_message' from 'google.protobuf.pyext' (/usr/local/lib/python3.10/dist-packages/google/protobuf/pyext/__init__.py)
Traceback (most recent call last):
  File "/usr/local/bin/yolo", line 8, in <module>
    sys.exit(entrypoint())
  File "/usr/local/lib/python3.10/dist-packages/ultralytics/cfg/__init__.py", line 583, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/model.py", line 602, in export
    return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/exporter.py", line 300, in __call__
    f[1], _ = self.export_engine()
  File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/exporter.py", line 142, in outer_func
    raise e
  File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/exporter.py", line 137, in outer_func
    f, model = inner_func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/exporter.py", line 676, in export_engine
    f_onnx, _ = self.export_onnx()  # run before trt import https://github.com/ultralytics/ultralytics/issues/7016
  File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/exporter.py", line 142, in outer_func
    raise e
  File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/exporter.py", line 137, in outer_func
    f, model = inner_func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/exporter.py", line 389, in export_onnx
    import onnx  # noqa
  File "/usr/local/lib/python3.10/dist-packages/onnx/__init__.py", line 75, in <module>
    from onnx import serialization
  File "/usr/local/lib/python3.10/dist-packages/onnx/serialization.py", line 16, in <module>
    import google.protobuf.json_format
  File "/usr/local/lib/python3.10/dist-packages/google/protobuf/json_format.py", line 30, in <module>
    from google.protobuf import descriptor
  File "/usr/local/lib/python3.10/dist-packages/google/protobuf/descriptor.py", line 28, in <module>
    from google.protobuf.pyext import _message
ImportError: cannot import name '_message' from 'google.protobuf.pyext' (/usr/local/lib/python3.10/dist-packages/google/protobuf/pyext/__init__.py)
Error: building at STEP "RUN cd / && yolo export model=yolov8n.pt format=engine device=0": while running runtime: exit status 1

And the beginning of my Containerfile:

FROM nvcr.io/nvidia/l4t-ml:r36.2.0-py3 AS builder

RUN pip install --upgrade pip
RUN apt upgrade -y
RUN pip install ultralytics protobuf grpcio-tools

# Install grpcio-tools and compile protobuf protocols
COPY ./protobuf /protobuf
RUN cd /protobuf/ && /usr/bin/python3 -m grpc_tools.protoc -I./ --python_out=. --pyi_out=. --grpc_python_out=. ./yoloserving.proto

# Compile TensorRT engine using YOLO model/package
RUN cd / && yolo export model=yolov8n.pt format=engine device=0
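
One workaround I’m considering (untested — the exact version pin that matches the wheels shipped in l4t-ml:r36.2.0 is a guess on my part) is pinning protobuf below 5.x before ultralytics pulls anything in, since the failing import (google.protobuf.pyext._message) comes from the older C++ protobuf layout:

```dockerfile
# Untested idea: pin protobuf to a 4.x release so an unpinned install
# can't upgrade it to 5.x (5.27.0 is what the log above shows being
# resolved). The "<5" bound is an assumption, not a verified fix.
RUN pip install "protobuf<5" grpcio-tools ultralytics
```
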

Is there a known-good requirements list that is safe to use with JetPack 6?

I ran this container (nvcr.io/nvidia/l4t-ml:r36.2.0-py3) interactively, pip-installed ultralytics, and exported the model:

yolo export model=yolov8n.pt format=engine device=0

and it worked! So I suspect the problem is that there is no way to access the GPU during the build with the new CDI stack:

podman build -f docker/yoloserver.Dockerfile --security-opt=label=disable --device=nvidia.com/gpu=all -t quay.io/whatever

I cannot use the --device flag the way I do with podman run; the build fails with:

Error: creating build executor: getting info of source device nvidia.com/gpu=all: stat nvidia.com/gpu=all: no such file or directory

What’s the recommended way to handle this?

Hi,

--device=nvidia.com/gpu=all is a desktop flag.
For Jetson, please add --runtime=nvidia instead; there is no need to pass the gpu=all option, since Jetson is a single-GPU device.
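
If I read that right, the build command from above would become something like the following (whether podman build accepts --runtime=nvidia on this particular setup is an assumption — it is documented behavior for docker/nvidia-docker, so treat this as a sketch to try):

```shell
# Sketch: same command as before, but with the CDI --device flag
# replaced by the NVIDIA runtime, per the advice above.
podman build -f docker/yoloserver.Dockerfile \
  --security-opt=label=disable \
  --runtime=nvidia \
  -t quay.io/whatever .
```
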

Thanks.
