Loading yolov3-tiny model within deepstream-5.1 gives a "Magic tag does not match" error

I was using a yolov3-tiny model in DeepStream 5.0, where it worked well. After upgrading to DeepStream 5.1, I can no longer launch the program.

The error output is shown below:

ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: coreReadArchive.cpp (32) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_STATE: std::exception
ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:1567 Deserialize engine failed from file: [model file]
0:00:10.263181819 3903 0x7234190 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :[model file] failed
0:00:10.263268977 3903 0x7234190 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :[model file] failed, try rebuild
0:00:10.263289000 3903 0x7234190 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
Yolo config file or weights file is NOT specified.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:10.263700360 3903 0x7234190 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1736> [UID = 1]: build engine file failed
0:00:10.263731702 3903 0x7234190 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1822> [UID = 1]: build backend context failed
0:00:10.263781033 3903 0x7234190 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1149> [UID = 1]: generate backend failed, check config file settings
0:00:10.264850711 3903 0x7234190 WARN nvinfer gstnvinfer.cpp:812:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:10.264883656 3903 0x7234190 WARN nvinfer gstnvinfer.cpp:812:gst_nvinfer_start: error: Config file path: [config file], NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(812): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: [config file], NvDsInfer Error: NVDSINFER_CONFIG_FAILED

The program worked well in the deepstream-5.0 environment (base image nvcr.io/nvidia/deepstream:5.0-20.07-triton). I only switched the base image to nvcr.io/nvidia/deepstream:5.1-21.02-triton to rebuild a new Docker image; running the program then produced the error above.

Does anyone know whether deepstream-5.1 supports the yolov3-tiny model? And does deepstream-5.1 support the yolov4 model? It would be great if deepstream-5.1 or deepstream-6.0 could support yolov4. I also ran into some related problems, which I reported in the topic How to let deepstream-6.0 use all gpu cards.

Hi @bridge ,

I hope you are OK with me moving your topic to the dedicated DeepStream SDK forum? You will receive much better support for your issue there.

Thanks!

Thanks @MarkusHoHo, you've already moved this topic to the dedicated DeepStream SDK forum, right? It looks like it's already there now.


Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Yes I did and passed you into the capable care of yingliu.

See above.

Serialization Error in verifyHeader: 0 (Magic tag does not match)

When you run with a different TensorRT version, the engine will be rebuilt; this is expected.
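One way to force the rebuild is simply to delete the stale serialized engine so nvinfer regenerates it on the next run. A minimal sketch, assuming the engine sits in the current directory; the filename below is a placeholder, so use whatever your model-engine-file config property actually points to:

```shell
# Assumed engine filename -- replace with your model-engine-file value.
engine="model_b1_gpu0_fp16.engine"

# Simulate a stale engine left over from the previous TensorRT version.
touch "$engine"

# Remove it; DeepStream will rebuild the engine from the model files
# on the next pipeline start instead of failing deserialization.
if [ -f "$engine" ]; then
    rm "$engine"
    echo "removed stale engine: $engine"
fi
```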

<nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
Yolo config file or weights file is NOT specified.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function

Please check your Yolo config files.
We support yolov3-tiny in DeepStream 5.1.
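For reference, the "Yolo config file or weights file is NOT specified" message is raised when the nvinfer config is missing the entries that point to the Darknet cfg and weights. A minimal sketch of the relevant [property] keys, modeled on the objectDetector_Yolo sample that ships with DeepStream (all paths are placeholders for your setup):

```ini
[property]
# Darknet network definition and weights -- both must be present,
# otherwise the custom engine builder aborts with the error above.
custom-network-config=yolov3-tiny.cfg
model-file=yolov3-tiny.weights
# Custom Yolo implementation library from the objectDetector_Yolo sample.
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
parse-bbox-func-name=NvDsInferParseCustomYoloV3Tiny
engine-create-func-name=NvDsInferYoloCudaEngineGet
```

If the cfg or weights path is wrong or relative to the wrong directory inside the container, the same error appears, so double-check the paths after the base-image change.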

We support the yolov4 model in deepstream_tao_apps from version 5.1 onward.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.