I was able to run the sample DeepStream apps (test1-test3). As a next step, I wanted to run my custom models using Triton Inference Server. For this, I tried running the image nvcr.io/nvidia/deepstream:5.1-21.02-triton on my k8s cluster, which has Tesla V100 GPUs.
However, the image keeps restarting with the below logs:
== DeepStreamSDK 5.1 ==
*** LICENSE AGREEMENT ***
By using this software you agree to fully comply with the terms and conditions
of the License Agreement. The License Agreement is located at
/opt/nvidia/deepstream/deepstream-5.0/LicenseAgreement.pdf. If you do not agree
to the terms and conditions of the License Agreement do not use the software.
== Triton Inference Server ==
NVIDIA Release 20.11 (build )
Copyright © 2018-2020, NVIDIA CORPORATION. All rights reserved.
Various files include modifications © NVIDIA CORPORATION. All rights reserved.
NVIDIA modifications are covered by the license terms that apply to the underlying
project or file.
find: File system loop detected; ‘/usr/bin/X11’ is part of the same file system loop as ‘/usr/bin’.
NOTE: Legacy NVIDIA Driver detected. Compatibility mode ENABLED.
NOTE: The SHMEM allocation limit is set to the default of 64MB. This may be
insufficient for the inference server. NVIDIA recommends the use of the following flags:
nvidia-docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 …
I tried increasing the pod's resource limits, which didn't help either. Could you let me know whether any steps are required before running this container, or point me to documentation that would help me figure out this issue?
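For reference, under plain Docker I believe the startup notice in the log is asking for something like the command below. The shm/ulimit values are taken straight from the log above; using `--gpus all` instead of the legacy `nvidia-docker` wrapper is my assumption:

```shell
# Flags recommended by the container's startup notice (see log above).
# "--gpus all" is the modern replacement for "nvidia-docker run"; adjust
# --shm-size upward if 1g turns out to be insufficient for the models.
docker run --gpus all \
  --shm-size=1g \
  --ulimit memlock=-1 \
  --ulimit stack=67108864 \
  -it nvcr.io/nvidia/deepstream:5.1-21.02-triton
```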
• Hardware Platform (Jetson / GPU)
GPU Tesla V100
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
Pull and run the image nvcr.io/nvidia/deepstream:5.1-21.02-triton on a cluster having dgpu Tesla V100.
• Requirement details (This is for a new requirement. Include the module name, i.e. for which plugin or which sample application, and the function description.)
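In case it is relevant: since Kubernetes has no direct equivalent of Docker's `--shm-size` flag, my understanding is that the usual workaround is to mount a memory-backed `emptyDir` volume at `/dev/shm`. A minimal sketch of what I could try (the pod/volume names and the 1Gi size are my assumptions, not from any official doc):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: deepstream-triton          # name is an assumption
spec:
  containers:
  - name: deepstream
    image: nvcr.io/nvidia/deepstream:5.1-21.02-triton
    resources:
      limits:
        nvidia.com/gpu: 1          # one Tesla V100 via the NVIDIA device plugin
    volumeMounts:
    - name: dshm
      mountPath: /dev/shm          # overrides the default 64MB shm
  volumes:
  - name: dshm
    emptyDir:
      medium: Memory               # tmpfs-backed volume
      sizeLimit: 1Gi               # mirrors --shm-size=1g from the log
```

Please let me know if this is the recommended way to satisfy the SHMEM requirement on a cluster, or if the container expects something else.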