This is my setup.
• Hardware Platform (GPU)
• Docker based on nvcr.io/nvidia/deepstream:6.1.1-devel
• DeepStream Version: 6.1.1
• TensorRT Version: 8.4.1.5
• NVIDIA GPU Driver Version: Driver Version: 515.65.01 CUDA Version: 11.7
• Issue Type: Question
• How to reproduce the issue? Give the Docker container access to the camera and run "python resnet50.py -i /dev/video0"
Hello,
I create a ResNet50 model in PyTorch with:
model = torchvision.models.resnet50(weights=torchvision.models.ResNet50_Weights.DEFAULT)
I export it to ONNX using:
BATCH_SIZE = 1
onnx_file = f"resnet50_pytorch_BS{BATCH_SIZE}.onnx"
dummy_input = torch.randn(BATCH_SIZE, 3, 224, 224)
torch.onnx.export(model, dummy_input, onnx_file, verbose=False)
Then I build a TensorRT engine from it using:
USE_FP16 = True
target_dtype = np.float16 if USE_FP16 else np.float32
tensorrt_file = f"resnet50_engine_pytorch_BS{BATCH_SIZE}.engine"
if not os.path.exists(tensorrt_file):
    if USE_FP16:
        !/usr/src/tensorrt/bin/trtexec --onnx=resnet50_pytorch_BS{BATCH_SIZE}.onnx --saveEngine={tensorrt_file} --explicitBatch --inputIOFormats=fp16:chw --outputIOFormats=fp16:chw --fp16
    else:
        !/usr/src/tensorrt/bin/trtexec --onnx=resnet50_pytorch_BS{BATCH_SIZE}.onnx --saveEngine={tensorrt_file} --explicitBatch
else:
    print(f"{tensorrt_file} engine already exists")
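Note that the error further down ("expecting library version 8.4.1.5 got 8.4.3.1") suggests this trtexec build ran against a different TensorRT install than the one inside the DeepStream container. A minimal sketch of rebuilding the engine with the container's own trtexec, assuming the ONNX file sits in the current host directory (the mount path and image tag here are assumptions, not taken from the original post):

```shell
# Run trtexec inside the DeepStream 6.1.1 container so the engine is
# serialized by the same TensorRT version that will deserialize it.
docker run --rm --gpus all -v "$PWD":/workspace -w /workspace \
  nvcr.io/nvidia/deepstream:6.1.1-devel \
  /usr/src/tensorrt/bin/trtexec \
    --onnx=resnet50_pytorch_BS1.onnx \
    --saveEngine=resnet50_engine_pytorch_BS1.engine \
    --fp16
```

An engine file is tied to the exact TensorRT version (and GPU) it was built with, so building it inside the same container that runs DeepStream avoids the mismatch entirely.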
Finally, I try to load it in DeepStream with the following configuration file:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-engine-file=../Primary_Detector/model_files/resnet50_engine_pytorch_BS1.engine
labelfile-path=../Primary_Detector/label_files/imagenet_labels.txt
force-implicit-batch-dim=1
batch-size=1
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
classifier-threshold=0.7
is-classifier=1
[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1
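As a side note on the [property] values: net-scale-factor is a single scalar that nvinfer multiplies every (mean-subtracted) pixel by, and the long decimal above is just 1/255, i.e. it rescales 0-255 pixels to 0-1. A quick stdlib check of that claim (note the torchvision ResNet50 weights also expect per-channel mean/std normalization, which this config does not apply; whether that matters for your accuracy is something to verify, not a confirmed bug):

```python
# nvinfer preprocessing is roughly: y = net_scale_factor * (x - offsets)
# The config value should be 1/255 up to float rounding.
net_scale_factor = 0.0039215697906911373
diff = abs(net_scale_factor - 1 / 255)
assert diff < 1e-8  # equal to 1/255 to ~8 decimal places
print(f"net-scale-factor deviates from 1/255 by {diff:.2e}")
```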
When I run my code I get the following errors:
ERROR: [TRT]: 6: The engine plan file is not compatible with this version of TensorRT, expecting library version 8.4.1.5 got 8.4.3.1, please rebuild.
ERROR: [TRT]: 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1528 Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_apps/Primary_Detector/model_files/resnet50_engine_pytorch_BS1.engine
0:00:03.563087247 356 0x35660d0 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_apps/Primary_Detector/model_files/resnet50_engine_pytorch_BS1.engine failed
0:00:03.598445142 356 0x35660d0 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_apps/Primary_Detector/model_files/resnet50_engine_pytorch_BS1.engine failed, try rebuild
0:00:03.598617726 356 0x35660d0 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:860 failed to build network since there is no model file matched.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:799 failed to build network.
0:00:04.650685827 356 0x35660d0 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
0:00:04.685967709 356 0x35660d0 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2029> [UID = 1]: build backend context failed
0:00:04.685994375 356 0x35660d0 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1266> [UID = 1]: generate backend failed, check config file settings
0:00:04.686021051 356 0x35660d0 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:04.686028152 356 0x35660d0 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Config file path: ds_resnet50.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
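For reference, one way to sidestep this kind of version mismatch is to point nvinfer at the ONNX file and let DeepStream build (and cache) the engine itself with its own TensorRT. A sketch of the relevant [property] lines, assuming the .onnx lives next to the engine file (the exact paths here are an assumption):

```
[property]
# Let DeepStream build the engine from ONNX with its bundled TensorRT.
onnx-file=../Primary_Detector/model_files/resnet50_pytorch_BS1.onnx
# model-engine-file then acts as the cache path for the generated engine;
# if deserialization fails, nvinfer rebuilds from the ONNX and rewrites it.
model-engine-file=../Primary_Detector/model_files/resnet50_pytorch_BS1.onnx_b1_gpu0_fp16.engine
```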
I have not included the .py file because I think it is not relevant, but I can post it if needed.