Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details needed to reproduce the issue.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and a description of the function.)
The default behaviour seems to have changed between DeepStream Docker images. With the 6.0 image, the container runs the given command directly:

docker run --rm nvcr.io/nvidia/deepstream:6.0-devel echo hello

With the 6.3 image, the same invocation first prints a license and Triton banner before running the command:

docker run --rm nvcr.io/nvidia/deepstream:6.3-gc-triton-devel echo hello

=============================== DeepStreamSDK 6.3.0 ===============================

*** LICENSE AGREEMENT ***
By using this software you agree to fully comply with the terms and conditions of the License Agreement. The License Agreement is located at /opt/nvidia/deepstream/deepstream/LicenseAgreement.pdf. If you do not agree to the terms and conditions of the License Agreement do not use the software.

=============================
== Triton Inference Server ==
=============================

NVIDIA Release 23.03 (build 56086596)

Triton Server Version 2.32.0

Copyright (c) 2018-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

WARNING: The NVIDIA Driver was not detected. GPU functionality will not be available.
   Use the NVIDIA Container Toolkit to start this container with GPU support; see
   https://docs.nvidia.com/datacenter/cloud-native/ .
How do I get the old behaviour back?
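For reference, the closest I have gotten is overriding the image's entrypoint on the command line. This assumes the banner is printed by an ENTRYPOINT script added in the newer image, which I have not confirmed; I would prefer a supported way to disable it if one exists.

```shell
# Check what entrypoint the 6.3 image declares (assumption: the banner
# comes from whatever script is listed here):
docker inspect --format '{{.Config.Entrypoint}}' nvcr.io/nvidia/deepstream:6.3-gc-triton-devel

# Disable the entrypoint entirely so only the given command runs:
docker run --rm --entrypoint "" nvcr.io/nvidia/deepstream:6.3-gc-triton-devel echo hello
```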