Docker Images for YOLOv8 Implementation on JetPack 6.2

I’m seeking guidance regarding appropriate Docker images for my current project on the Jetson Orin Nano. As this is my first time working with Docker containers and NVIDIA NGC images, I would appreciate clear recommendations. I’ve recently upgraded to JetPack 6.2 (L4T 36.4.3) and I’m looking for a compatible Docker image that will support the following requirements:

  1. YOLOv8 integration for real-time object detection
  2. CSI camera input processing
  3. CUDA and TensorRT acceleration for optimal inference performance
  4. Support for video processing and encoding/decoding

Could someone please recommend the most appropriate NGC container image for the Jetson Orin Nano that would be compatible with JetPack 6.2? I’ve noticed some version mismatches when attempting to use various l4t-ml tags.
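For anyone hitting the same l4t-ml tag mismatches: these can usually be diagnosed by comparing the device's L4T release string against the version encoded in the image tag. A minimal sketch (on the device the line comes from `head -n1 /etc/nv_tegra_release`; the release line below is a hypothetical example of what JetPack 6.2 reports):

```shell
# On a real Jetson: release_line=$(head -n1 /etc/nv_tegra_release)
# Hypothetical JetPack 6.2 release line, used here for illustration:
release_line='# R36 (release), REVISION: 4.3, GCID: 12345678'

# Extract the major release and revision, yielding "36.4.3" to match
# against container tags such as l4t-ml:r36.x.y-*
major=$(echo "$release_line" | sed -n 's/.*R\([0-9]*\) (release).*/\1/p')
rev=$(echo "$release_line" | sed -n 's/.*REVISION: \([0-9.]*\).*/\1/p')
echo "L4T version: ${major}.${rev}"
```

If the tag's r-version does not match this string, the container's CUDA/cuDNN userspace libraries may not line up with the host drivers.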

Alternatively, if there’s a recommended approach to building a custom Docker image specifically for these requirements on JetPack 6.2, I would appreciate any guidance or best practices.

Thank you for your assistance.

(Docker version 28.0.1)

Hi,

The Ultralytics team provides a container for YOLO, so you can give it a try:

If you are looking for a container from our side, an image with PyTorch preinstalled can be found at the link below:
Please use the image with the iGPU tag on Jetson.
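For concreteness, pulling and starting the iGPU-tagged PyTorch image (the tag is the one reported later in this thread; the run flags are a common Jetson setup, not an official recipe) might look like:

```shell
# Pull the iGPU build of the NGC PyTorch container
docker pull nvcr.io/nvidia/pytorch:25.03-py3-igpu

# Start it with the NVIDIA runtime so CUDA/TensorRT are visible inside
sudo docker run -it --runtime nvidia --ipc=host \
  nvcr.io/nvidia/pytorch:25.03-py3-igpu
```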

Thanks.


Response Regarding Docker Solutions for Jetson Orin Nano

Thank you for your support and response to my initial inquiry about Docker images for my Jetson Orin Nano project.

Initial Inquiry

I sought recommendations for Docker images compatible with JetPack 6.2 (L4T 36.4.3) that would support YOLOv8 integration, CSI camera input processing, CUDA and TensorRT acceleration, and video processing capabilities.

Testing Process

Based on your recommendations, I tested two Docker images:

  1. Ultralytics Docker image (as suggested in the documentation link)
  2. NVIDIA PyTorch Docker image with the iGPU tag

System Configuration

  • JetPack 6.2 (L4T 36.4.3)
  • Raspberry Pi Camera Module 3 WIDE connected via CSI
  • Docker version 28.0.1

Implementation Challenges

Initially, I attempted to run a simple Python script to access the camera but encountered permission issues with both containers. Despite the clear documentation, I struggled with properly configuring container permissions for camera access.

Resolution

After extensive research in the forums, I found that the container needed explicit access to the camera device node and the Argus socket on the host, along with the appropriate group membership. With those permissions configured, the Ultralytics Docker image worked successfully.

I was able to achieve real-time object detection using the YOLO11 model. For future developers who may encounter similar issues, I’ve included the successful Docker run command below:

sudo docker run -it \
--runtime nvidia --gpus all --ipc=host \
--privileged \
--device /dev/video0 \
--group-add video \
-v /tmp/argus_socket:/tmp/argus_socket \
-v /lib/modules:/lib/modules \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
--name project \
ultralytics/ultralytics:latest-jetson-jetpack6

Note: The last line is the Docker image name and should be adjusted to your specific requirements. The NVIDIA PyTorch image (nvcr.io/nvidia/pytorch:25.03-py3-igpu) can be substituted if preferred, though it did not work in my particular setup.
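Once the container is up, CSI capture typically goes through GStreamer’s nvarguscamerasrc element, which talks to the Argus daemon via the /tmp/argus_socket mount in the command above. A sketch of wiring that into Ultralytics YOLO follows; the helper name, resolution defaults, and model filename are illustrative assumptions, not the exact code I ran:

```python
def csi_pipeline(sensor_id=0, width=1280, height=720, fps=30):
    """Build a GStreamer string delivering BGR frames from a CSI camera.

    nvarguscamerasrc reaches the Argus daemon through /tmp/argus_socket,
    which is why that socket is mounted into the container.
    """
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"framerate={fps}/1 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink drop=1"
    )

# Inside the container (requires the camera, a display, and the ultralytics
# package shipped with the image), usage could look like:
#
#   import cv2
#   from ultralytics import YOLO
#   model = YOLO("yolov8n.pt")                      # or a YOLO11 weight file
#   cap = cv2.VideoCapture(csi_pipeline(), cv2.CAP_GSTREAMER)
#   while cap.isOpened():
#       ok, frame = cap.read()
#       if not ok:
#           break
#       annotated = model(frame)[0].plot()
#       cv2.imshow("YOLO", annotated)
#       if cv2.waitKey(1) & 0xFF == ord("q"):
#           break
#   cap.release()
```

The usage portion is commented out because it only runs on the device with a camera attached; the pipeline builder itself is plain Python.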

Thank you again for your assistance. This setup now fully meets my project requirements for YOLOv8 integration, CSI camera input, and accelerated inference performance.
