Problem with DeepStream Pipeline using Docker

Hi, I got an error when running a Python DeepStream pipeline using a Docker container. My pipeline is a simple YOLOv8 object detection with JPEG parsing. When I start the container, the pipeline output is okay. But when I exec into the container again, the model’s output is not working properly. Has anyone encountered the same issue, and how did you resolve it?

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)
• The pipeline being used

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.3
• NVIDIA GPU Driver Version (valid for GPU only): 525.125.06
Pipeline:

Can you share the command line you ran to reproduce the problem? I can't identify the issue from the description above.

Hi, I have found the root cause. I am running on a GPU hardware platform, and my application has custom gst-plugins with ".cu" files that need to be compiled. Therefore, I run the following steps via entrypoint.sh in my Docker container:

a. sudo apt-get install -y cuda-compat-12-0
b. export LD_LIBRARY_PATH=/usr/local/cuda/compat:$LD_LIBRARY_PATH

However, if I restart the Docker container, I get the error I described above. I also tried running these steps in my Dockerfile, but that does not work either. How can I resolve this issue?

Is the docker image you are using in the list below?

If you use nvcr.io/nvidia/deepstream:6.3-triton-multiarch as the base image, the above operations are not required.

Yes, I use nvcr.io/nvidia/deepstream:6.3-triton-multiarch as the base image, but if I don't perform the above operations, I get the error I mentioned before.

DS-6.3 requires CUDA 12.1, so CUDA 12.1 is already installed by default in the Docker image.

Why do you need to install cuda-compat-12-0 ?

If custom gst-plugins need to compile “*.cu” files, you should use the same CUDA version as deepstream, otherwise it may not work.
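Before building the plugins, the toolkit version can be checked against what DeepStream 6.3 expects. A minimal sketch (my own example, not from the thread; the nvcc output line is hard-coded as an assumption so the snippet is self-contained, and inside the container you would capture the real output with out="$(nvcc --version)"):

```shell
# Parse the release number out of nvcc's version banner and compare it
# against the CUDA version DeepStream 6.3 expects (12.1).
out="Cuda compilation tools, release 12.1, V12.1.105"

# Extract "12.1" from the line above
installed="$(printf '%s\n' "$out" | sed -n 's/.*release \([0-9][0-9]*\.[0-9]*\).*/\1/p')"
required="12.1"

if [ "$installed" = "$required" ]; then
    echo "OK: CUDA toolkit $installed matches DeepStream 6.3 requirement"
else
    echo "Mismatch: toolkit is $installed, DeepStream 6.3 needs $required" >&2
fi
```

Running such a check early in the build makes a toolkit/runtime mismatch fail loudly at compile time instead of producing wrong model output at inference time.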

If you want to install multiple versions of CUDA, the correct way is to install them in the Dockerfile.

If you only modify entrypoint.sh, the changes will be lost when the container is deleted.
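For reference, a sketch of how the two entrypoint.sh steps could be baked into the image at build time instead (assuming the base image and package name from this thread; note that with a matching CUDA toolkit the compat package should not be needed at all):

```dockerfile
# Sketch only: persist the compat package and library path in the image,
# so they survive container deletion and re-creation.
FROM nvcr.io/nvidia/deepstream:6.3-triton-multiarch

# Install the CUDA forward-compat package at build time instead of in entrypoint.sh
RUN apt-get update && apt-get install -y cuda-compat-12-0 \
    && rm -rf /var/lib/apt/lists/*

# ENV persists for every process in the container, unlike an export in entrypoint.sh
ENV LD_LIBRARY_PATH=/usr/local/cuda/compat:${LD_LIBRARY_PATH}
```

The key difference is that RUN and ENV are recorded in the image layers, so a new container started from the image already has them, whereas entrypoint.sh changes made inside a running container disappear with that container.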

Thanks for your assistance. My previous CUDA version was 12.0; after I upgraded to 12.1, it worked.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.