Build Docker Images for DeepStream on Jetson - Mounted volumes

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson
• DeepStream Version
6.0
• JetPack Version (valid for Jetson only)
4.6
• TensorRT Version
8.0.1.6-1+cuda10.2
• Issue Type( questions, new requirements, bugs)
Req/Bug
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi! How are you?
I will try to explain our little problem; maybe you can clarify it for us.

  1. Pull an official image from NGC for DeepStream 6.0 on Jetson, e.g. http://nvcr.io/nvidia/deepstream-l4t:6.0-triton
  2. Write a Dockerfile that adds the configuration needed to leave Triton ready to use (sketched as a full Dockerfile below), i.e.:
cd /opt/nvidia/deepstream/deepstream-6.0/samples
./triton_backend_setup.sh
apt-get update && apt-get install -y ffmpeg
./prepare_classification_test_video.sh
./prepare_ds_triton_model_repo.sh
  3. The docker build fails because prepare_ds_triton_model_repo.sh looks for trtexec, CUDA, libraries, etc., and won’t work.
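To make this concrete, here is roughly the Dockerfile we are trying (a sketch only; the base tag and paths are the ones from the steps above):

FROM nvcr.io/nvidia/deepstream-l4t:6.0-triton

WORKDIR /opt/nvidia/deepstream/deepstream-6.0/samples

# Install the Triton backends shipped with DeepStream
RUN ./triton_backend_setup.sh

# ffmpeg is needed by the sample-video preparation script
RUN apt-get update && apt-get install -y ffmpeg && rm -rf /var/lib/apt/lists/*
RUN ./prepare_classification_test_video.sh

# This is the step that fails: trtexec/CUDA are not available during docker build
RUN ./prepare_ds_triton_model_repo.sh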

Is this last step necessary to have Triton Server working on Jetson? It is kind of a blocker for building our custom image. We want to use Triton Server with at least the TensorFlow and ONNX backends for our own custom models.

The strategy of mounting libraries from the host makes sense on this platform. However, it becomes a problem whenever access to CUDA or other binaries is required at Docker build time.
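For context, and as far as I understand the stock setup on JetPack 4.x (an assumption on my part, not something stated above): the NVIDIA container runtime mounts CUDA, cuDNN and TensorRT from the Jetson host into l4t containers according to CSV files on the host, which is why they show up at run time but not during docker build. They can be inspected on the host:

# On the Jetson host: mount specifications used by nvidia-container-runtime
ls /etc/nvidia-container-runtime/host-files-for-container.d/
# typically cuda.csv, cudnn.csv, l4t.csv, tensorrt.csv ...
# each CSV lists host files/directories (e.g. under /usr/local/cuda-10.2)
# that get mounted into the container at run time
head /etc/nvidia-container-runtime/host-files-for-container.d/cuda.csv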

As another example, we tried to build TorchScript support for the Triton backend, but again libraries were missing.

Thanks!

If your docker image does not need the sample Triton models that DeepStream provides, this step can be skipped.


Hi, and thanks @mchi!
One last question: what would you recommend when we want to build a library during the Docker build and the CUDA dependencies are missing? Is there some way to work around this behavior?

At the moment, the only option I see is to build a completely new DeepStream container and copy the libraries inside. I have been testing the base deepstream image and it has the same volume mounting.
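What I have in mind is roughly the following (only a sketch; the CUDA version and paths match JetPack 4.6 on our host, and since COPY only reads from the build context, the host's /usr/local/cuda-10.2 would first have to be copied into the context):

FROM nvcr.io/nvidia/deepstream-l4t:6.0-triton

# cuda-10.2 in the build context is assumed to be a copy of the host's /usr/local/cuda-10.2
COPY cuda-10.2 /usr/local/cuda-10.2
ENV PATH=/usr/local/cuda-10.2/bin:${PATH}
ENV LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:${LD_LIBRARY_PATH}

# ...CUDA-dependent build steps can now run at build time...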

Why do you want to include CUDA in the DS docker?

We like the workflow of using Docker on the Jetson Nano. It is nice to have a stable Docker image from NVIDIA available to any member of our teams/projects.

It is then convenient for us to extend that Docker image with a recipe (the Dockerfile) that adds other features. We have already added some features/layers successfully.
Now, in addition, we are trying to compile the Triton PyTorch backend during the docker build.

The specific steps for this are a bit off-topic, but I will share them so you know what I am talking about:

# ...
# https://github.com/triton-inference-server/pytorch_backend

# Build PyTorch backend support for Triton
RUN apt-get update && apt-get install -y cmake patchelf rapidjson-dev python3-dev
WORKDIR /opt
RUN git clone https://github.com/triton-inference-server/pytorch_backend.git
# CMake >= 3.18 is required to build the backend, so install a newer CMake
RUN wget https://github.com/Kitware/CMake/releases/download/v3.22.0-rc2/cmake-3.22.0-rc2-linux-aarch64.sh
RUN bash cmake-3.22.0-rc2-linux-aarch64.sh --prefix=/usr/local --exclude-subdir --skip-license
RUN mkdir /opt/pytorch_backend/build
WORKDIR /opt/pytorch_backend/build
# This step fails: CUDA is missing at build time
RUN cmake -DCMAKE_INSTALL_PREFIX:PATH=`pwd`/install -DTRITON_PYTORCH_DOCKER_IMAGE="nvcr.io/nvidia/pytorch:21.08-py3" ..
RUN make install

This piece of the Dockerfile blocks the build. It forces us either to run the container manually, compile the library, exit, and commit the layer, or to compile on the Jetson host and copy the binary backend in during the build.
Those options should work, but they are not the optimal solution for a Docker workflow, given all these nice containers you have set up at NGC.
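For reference, the manual workaround mentioned above looks roughly like this (image and container names are placeholders):

# Build an image up to the last layer that succeeds, then finish by hand
docker run -it --runtime nvidia --name pytorch-backend-build my-ds-image:partial bash
#   ...inside the container: run the cmake / make install steps manually, then exit...
docker commit pytorch-backend-build my-ds-image:with-pytorch-backend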

Thanks.

OK, so for now you have to copy the CUDA libraries into the docker image.

