DeepStream container base vs devel image nvcc

Dear forum, Dear nv-devs,

I’m really confused about the contents of the DeepStream base image. The DeepStream container documentation states that:

Base: The DeepStream base container contains the plugins and libraries that are part of the DeepStream SDK along with dependencies such as CUDA, TensorRT, GStreamer, etc. This image is the recommended one for users that want to create docker images for their own DeepStream based applications. Please note that the base images do not contain sample apps or Graph Composer.

Reading this, I thought that CUDA, for example, is part of the base image. However, it appears not to be. See this simple example:

nvidia-docker run nvcr.io/nvidia/deepstream:6.0-devel /bin/sh -c "nvcc -V" 

This will print out as expected:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Aug_15_21:14:11_PDT_2021
Cuda compilation tools, release 11.4, V11.4.120
Build cuda_11.4.r11.4/compiler.30300941_0

However, using the base or samples image:

nvidia-docker run nvcr.io/nvidia/deepstream:6.0-base /bin/sh -c "nvcc -V"  # I also checked 6.1-base
nvidia-docker run nvcr.io/nvidia/deepstream:6.0-samples /bin/sh -c "nvcc -V"

returns:

/bin/sh: 1: nvcc: not found

Digging further, it appears that many of the binaries contained in the devel image are not part of the base image. The Python versions also seem to differ.
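
For example, a quick way to compare the Python versions (a sanity check; assumes both tags are pulled locally):

nvidia-docker run nvcr.io/nvidia/deepstream:6.0-base /bin/sh -c "python3 --version"
nvidia-docker run nvcr.io/nvidia/deepstream:6.0-devel /bin/sh -c "python3 --version"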

It does work when using the Triton image, however.
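
For example, running the same check against the Triton image (assuming the 6.0-triton tag):

nvidia-docker run nvcr.io/nvidia/deepstream:6.0-triton /bin/sh -c "nvcc -V"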

Am I misunderstanding something, or is this an issue with the respective containers? Could you fix the base images so that they are more interchangeable with the devel image? As it stands, it is hard for me to develop anything based on the base image.

Also, there are no CUDA binaries in the base image:

nvidia-docker run nvcr.io/nvidia/deepstream:6.0-base /bin/sh -c "ls /usr/local/cuda/bin"
ls: cannot access '/usr/local/cuda/bin': No such file or directory

Cheers

  • base docker: contains only the runtime libraries and GStreamer plugins; can be used as a base to build custom dockers for DeepStream applications
  • devel docker: contains the entire SDK along with a development environment for building DeepStream applications and Graph Composer
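
For illustration, a minimal custom docker built on top of the base image could look like this (my-ds-app and config.txt are placeholders for your own prebuilt application):

FROM nvcr.io/nvidia/deepstream:6.0-base
# Copy a prebuilt DeepStream application into the image (paths are hypothetical)
COPY my-ds-app /opt/my-ds-app/
WORKDIR /opt/my-ds-app
CMD ["./my-ds-app", "-c", "config.txt"]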

Dear amycao,
I’m still confused about how I am supposed to build anything based on the base container. Let me quickly summarize where I have trouble following you:

  • Okay, so the TensorRT runtime is included.
  • However, the model .plan file needs to be generated for a specific GPU, right? So I would still need to convert my ONNX file to a .plan file beforehand for every GPU that might run my container, because there is no way to convert the ONNX file to a .plan file with the base container.

Am I missing something here?

There is trtexec under /usr/src/tensorrt/bin; you can use it to build your ONNX file into an engine.
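
For example, mounting the current directory into the base container and converting there (model.onnx and model.plan are placeholder filenames):

nvidia-docker run -v $(pwd):/models nvcr.io/nvidia/deepstream:6.0-base /bin/sh -c "/usr/src/tensorrt/bin/trtexec --onnx=/models/model.onnx --saveEngine=/models/model.plan"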
