• Hardware Platform (Jetson / GPU) - tested with both the deepstream-6.3-samples and deepstream-6.3-triton-multiarch Docker images
• DeepStream Version - 6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version - 8.5.3.1
• NVIDIA GPU Driver Version (valid for GPU only) - 535
• Issue Type( questions, new requirements, bugs) - bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) - run the default DeepStream Python sample app (deepstream-test2), no changes.
When running the DeepStream Python apps inside the deepstream-6.3-samples Docker container, engine creation fails with the following output:
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_python_apps/apps/deepstream-test2/../../../../samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt_b16_gpu0_int8.engine open error
0:00:03.047921344 5021 0x397ea30 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 3]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_python_apps/apps/deepstream-test2/../../../../samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt_b16_gpu0_int8.engine failed
0:00:03.134130882 5021 0x397ea30 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 3]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_python_apps/apps/deepstream-test2/../../../../samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt_b16_gpu0_int8.engine failed, try rebuild
0:00:03.135437853 5021 0x397ea30 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 3]: Trying to create engine from model files
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
NvDsInferCudaEngineGetFromTltModel: Failed to open TLT encoded model file /opt/nvidia/deepstream/deepstream-6.3/sources/deepstream_python_apps/apps/deepstream-test2/../../../../samples/models/Secondary_VehicleTypes/resnet18_vehicletypenet.etlt
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:728 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:794 Failed to get cuda engine from custom library API
0:00:05.380050980 5021 0x397ea30 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<secondary2-nvinference-engine> NvDsInferContext[UID 3]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2022> [UID = 3]: build engine file failed
ERROR: [TRT]: 2: [logging.cpp::decRefCount::65] Error Code 2: Internal Error (Assertion mRefCount > 0 failed. )
corrupted size vs. prev_size while consolidating
Aborted (core dumped)
Any ideas what might be causing this? The stock apps, including the C++ ones, run fine; only the Python apps hit this error.
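One thing worth noting from the log: the failure cascade starts with nvinfer being unable to open the .etlt model file itself (NvDsInferCudaEngineGetFromTltModel: Failed to open TLT encoded model file ...), not with the engine build proper. A quick sanity check I've been using is below; it's a minimal sketch, and the exact file list (labels.txt, cal_trt.bin) is my assumption based on what the secondary classifier configs usually reference, so adjust it to your config:

```shell
#!/bin/sh
# Hypothetical sanity check: verify each model asset under the path from the
# error log exists and is readable before nvinfer tries to build the engine.
check_model_files() {
  dir="$1"; shift
  missing=0
  for f in "$@"; do
    if [ -r "$dir/$f" ]; then
      echo "OK      $dir/$f"
    else
      echo "MISSING $dir/$f"
      missing=1
    fi
  done
  return $missing
}

# Path taken verbatim from the error log; file names are assumptions.
MODEL_DIR=/opt/nvidia/deepstream/deepstream-6.3/samples/models/Secondary_VehicleTypes
check_model_files "$MODEL_DIR" resnet18_vehicletypenet.etlt labels.txt cal_trt.bin \
  || echo "Some model files are missing or unreadable under $MODEL_DIR"
```

If any line prints MISSING, the Python app is failing before TensorRT ever gets a valid model, which would also explain the confusing builder-destruction errors that follow.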