I was using a yolov3-tiny model in DeepStream 5.0, and it was working well. Now I'd like to upgrade to DeepStream 5.1, but I can no longer launch the program.
The error output is shown below:
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: coreReadArchive.cpp (32) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_STATE: std::exception
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1567 Deserialize engine failed from file: [model file]
0:00:10.263181819 3903 0x7234190 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :[model file] failed
0:00:10.263268977 3903 0x7234190 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :[model file] failed, try rebuild
0:00:10.263289000 3903 0x7234190 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
Yolo config file or weights file is NOT specified.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:10.263700360 3903 0x7234190 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
0:00:10.263731702 3903 0x7234190 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1822> [UID = 1]: build backend context failed
0:00:10.263781033 3903 0x7234190 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1149> [UID = 1]: generate backend failed, check config file settings
0:00:10.264850711 3903 0x7234190 WARN nvinfer gstnvinfer.cpp:812:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:10.264883656 3903 0x7234190 WARN nvinfer gstnvinfer.cpp:812:gst_nvinfer_start: error: Config file path: [config file], NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(812): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: [config file], NvDsInfer Error: NVDSINFER_CONFIG_FAILED
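For context, my nvinfer config follows the objectDetector_Yolo sample and points at the YOLO cfg/weights through a custom library. The paths below are placeholders, not my actual files, but the relevant keys look roughly like this:

```
[property]
# Hypothetical paths for illustration -- the real config uses this setup's own files
custom-network-config=yolov3-tiny.cfg
model-file=yolov3-tiny.weights
model-engine-file=model_b1_gpu0_fp16.engine
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
```

The "Yolo config file or weights file is NOT specified" message appears after the stale engine fails to deserialize and the custom library tries to rebuild it from these files.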
The program worked well in the deepstream-5.0 environment (base image nvcr.io/nvidia/deepstream:5.0-20.07-triton). I only switched to a different base image (nvcr.io/nvidia/deepstream:5.1-21.02-triton) to rebuild a new Docker image, and running the program then produced the error above.
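The rebuild amounted to nothing more than changing the FROM line; a sketch (the rest of the Dockerfile, which copies in the app and model files, is unchanged and omitted here):

```
# Previous, working base image:
# FROM nvcr.io/nvidia/deepstream:5.0-20.07-triton
FROM nvcr.io/nvidia/deepstream:5.1-21.02-triton

# ... remaining build steps unchanged ...
```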
Does anyone know whether deepstream-5.1 supports the yolov3-tiny model? And does deepstream-5.1 support the yolov4 model? It would be great if deepstream-5.1 or deepstream-6.0 supported yolov4. I also ran into some related problems, which I reported in the topic How to let deepstream-6.0 use all gpu cards.