[QUESTION] Best practices for deploying DeepStream applications using Docker

• Hardware Platform (Jetson / GPU)
Both (Jetson AGX Orin 32GB / RTX4080)
• DeepStream Version
latest (now 6.3)
• JetPack Version (valid for Jetson only)
latest (now 5.1.2)
• TensorRT Version
latest
• NVIDIA GPU Driver Version (valid for GPU only)
latest on Ubuntu 20.04
• Issue Type( questions, new requirements, bugs)
questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

We are currently using DeepStream to develop a computer-vision product. As noted above, it is cross-platform (Jetson and x86), and we plan to use Docker for deployment.

Our current packaging approach is a multi-stage build: we use the officially provided image as the base and copy our compiled program into it. However, the compressed size of the base image (deepstream:6.3-triton-multiarch) is 12.53 GB, and any image we build on top of it will only be larger.
We are not sure whether this is the officially recommended packaging method.
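For reference, a minimal sketch of the multi-stage layout described above. It assumes a hypothetical Makefile-based app in ./src that produces a binary called my_pipeline; the base tag is the one from this question, and all paths, names, and build commands are illustrative only:

```dockerfile
# Build stage: compile the application against the DeepStream SDK in the official image.
# (deepstream:6.3-triton-multiarch is the tag mentioned above; adjust for your release/platform.)
FROM nvcr.io/nvidia/deepstream:6.3-triton-multiarch AS builder
WORKDIR /opt/app
COPY src/ ./src/
RUN make -C src                      # hypothetical build step; use your real build command

# Runtime stage: start again from the official image and copy in only the build outputs.
FROM nvcr.io/nvidia/deepstream:6.3-triton-multiarch
WORKDIR /opt/app
COPY --from=builder /opt/app/src/my_pipeline ./my_pipeline   # hypothetical binary name
COPY configs/ ./configs/                                     # hypothetical config directory
ENTRYPOINT ["./my_pipeline", "configs/pipeline.txt"]
```

Note that because the runtime stage still starts from the 12.53 GB base, the multi-stage split only keeps build artifacts and sources out of the final image; it cannot make the result smaller than the base image itself.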

Our question is: how should we package a production-grade Docker image? Are there any official best practices?

The official images I’m talking about are:
DeepStream x86 Containers
DeepStream Jetson containers

The Dockerfiles are open source in NVIDIA-AI-IOT/deepstream_dockers, a project demonstrating how to make DeepStream docker images (github.com).

You can customize them according to your requirements.
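As a rough illustration of that suggestion (the repository layout, Dockerfile names, and build steps below are assumptions; follow the repo's README for the actual procedure):

```
# Hypothetical workflow: build a customized DeepStream base image from the
# open-source Dockerfiles instead of pulling the full published tag.
git clone https://github.com/NVIDIA-AI-IOT/deepstream_dockers.git
cd deepstream_dockers
# Edit the Dockerfile for your platform to drop components you do not ship
# (for example the sample apps, or Triton if you never use nvinferserver),
# then build and tag the result as your own base image:
docker build -t my-registry/deepstream-custom:6.3 -f <path-to-platform-dockerfile> .
```

Keep in mind that deleting files in a later layer of a derived image does not shrink it, because the base layers are immutable; any trimming has to happen in the base Dockerfile itself (or in a rebuilt base) rather than in your application image.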
