Engine files build failure in Deepstream 6.1.1-devel docker container

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) - nvcr.io/nvidia/deepstream:6.1.1-devel
• DeepStream Version - 6.1.1
• TensorRT Version - 8.4.1
• NVIDIA GPU Driver Version (valid for GPU only) - 520.61.05
• Issue Type (questions, bugs)
• How to reproduce the issue? - Run the deepstream-test1 app with dstest1_config.yml

root@tensorbook:/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1# ./deepstream-test1-app dstest1_config.yml
Using file: dstest1_config.yml
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1482 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:03.775129855   337 0x55b4b82d28c0 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:03.776150111   337 0x55b4b82d28c0 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/apps/sample_apps/deepstream-test1/../../../../samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:03.776176758   337 0x55b4b82d28c0 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
WARNING: [TRT]: GPU error during getBestTactic: conv1 + bn_conv1 + activation_1/Relu : invalid configuration argument
[ ERROR: CUDA Runtime ] invalid configuration argument
ERROR: [TRT]: 1: [caskBuilderUtils.h::transform::204] Error Code 1: Cask (CASK Transform Weights Failed)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1119 Build engine failed from config file
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:811 failed to build trt engine.
0:00:12.144044735   337 0x55b4b82d28c0 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
0:00:12.145486187   337 0x55b4b82d28c0 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2029> [UID = 1]: build backend context failed
0:00:12.145510831   337 0x55b4b82d28c0 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1266> [UID = 1]: generate backend failed, check config file settings
0:00:12.145524724   337 0x55b4b82d28c0 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-nvinference-engine> error: Failed to create NvDsInferContext instance
0:00:12.145528266   337 0x55b4b82d28c0 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-nvinference-engine> error: Config file path: dstest1_pgie_config.yml, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running...
ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
Error details: gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:dstest1-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: dstest1_pgie_config.yml, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback
Deleting pipeline

Which GPU and CUDA version are you using?

01:00.0 VGA compatible controller: NVIDIA Corporation GA104M [GeForce RTX 3080 Mobile / Max-Q 8GB/16GB] (rev a1)
CUDA Driver Version in host OS - 520.61.05
CUDA Toolkit Version in host OS - 11.8
CUDA Version in DeepStream-6.1.1-devel container - 11.7
TRT Version in DeepStream-6.1.1-devel container - 8.4.1

For DS 6.1.1, you should use the versions below:

  • NVIDIA driver 515.65.01
  • CUDA 11.7 Update 1
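The recommended versions above can be sanity-checked programmatically. Below is a minimal sketch of such a check; the minimum-driver values in the table are taken from the CUDA Toolkit release notes and should be treated as assumptions to verify against NVIDIA's compatibility matrix for your release.

```python
# Sketch: check whether the host driver meets the minimum required by the
# container's CUDA runtime. Values in MIN_DRIVER are assumptions based on
# the CUDA Toolkit release notes, not an authoritative matrix.

# Minimum Linux driver version per CUDA runtime branch (assumed values).
MIN_DRIVER = {
    "11.7": "515.43.04",  # CUDA 11.7 (DeepStream 6.1.1 ships 11.7 Update 1)
    "11.8": "520.61.05",
}

def parse(version: str) -> tuple:
    """Turn a version string like '515.65.01' into (515, 65, 1)."""
    return tuple(int(part) for part in version.split("."))

def driver_supports(driver: str, cuda_runtime: str) -> bool:
    """True if the installed driver is at least the runtime's minimum."""
    minimum = MIN_DRIVER.get(cuda_runtime)
    if minimum is None:
        raise ValueError(f"no minimum known for CUDA {cuda_runtime}")
    return parse(driver) >= parse(minimum)

# The host driver reported in this thread (520.61.05) is newer than the
# assumed minimum for CUDA 11.7, so the driver API itself is not the blocker.
print(driver_supports("520.61.05", "11.7"))  # True
print(driver_supports("510.47.03", "11.7"))  # False
```

Note this only checks the driver-API floor; it says nothing about whether a driver newer than the recommended one is fully validated with a given DeepStream release.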

My host OS doesn’t have the CUDA toolkit. I have only the latest CUDA driver on my host OS and nvidia-docker for orchestration. How do the host OS driver and CUDA version affect the Docker container’s dependencies? DeepStream 6.1.1 comes with CUDA 11.7 Update 1 and the other dependencies. I don’t get this issue with DeepStream 6.0.1 even though I have CUDA driver 520.61.05 on my host OS. @Amycao, your answer doesn’t address my issue technically.

===》How do the host OS driver and CUDA version affect the Docker container’s dependencies?
Some CUDA APIs depend on your host driver, so the two need to be compatible. Also, CUDA and TensorRT are mapped in from your host to reduce the size of the Docker image.
We suggest you use our officially recommended versions and install them step by step. Note that the dependencies change as the DeepStream version is upgraded.
dGPU Setup for Ubuntu


This explanation answers my question. If CUDA and TensorRT are mapped in from my host to reduce the size of the Docker image, that explains why I need the exact CUDA and TRT versions on my host. Previously I thought the DS image came with all the CUDA and TRT runtime dependencies inside. Since the CUDA driver API is backward compatible with earlier CUDA runtime versions, I used to keep only the latest CUDA driver on my host OS. Thank you.
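The two different compatibility rules discussed in this thread can be sketched side by side: the CUDA driver API is backward compatible (a newer host driver can serve an older runtime), while components mapped in from the host should match what the image was built against. The version strings below are the ones reported in this thread, not a general compatibility matrix, and the minimum-driver value is an assumption.

```python
# Sketch contrasting the two rules: backward-compatible driver API vs.
# exact-match expectation for mapped-in components. All concrete version
# strings here come from this thread and are assumptions, not a matrix.

CONTAINER_EXPECTS = {"cuda": "11.7", "tensorrt": "8.4.1"}
DRIVER_MINIMUM = "515.43.04"  # assumed minimum Linux driver for CUDA 11.7

def as_tuple(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def driver_ok(host_driver: str) -> bool:
    """Driver API is backward compatible: at-or-above the minimum works."""
    return as_tuple(host_driver) >= as_tuple(DRIVER_MINIMUM)

def mapped_component_ok(component: str, host_version: str) -> bool:
    """Mapped-in libraries: match the container's expected version exactly."""
    return host_version == CONTAINER_EXPECTS[component]

print(driver_ok("520.61.05"))                    # True: newer than minimum
print(mapped_component_ok("cuda", "11.8"))       # False: host toolkit is 11.8
print(mapped_component_ok("tensorrt", "8.4.1"))  # True
```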
