Error occurs when updating the model through OTA

Please provide complete information as applicable to your setup.

Hardware Platform (Jetson / GPU): Jetson Xavier NX
DeepStream Version: 6.0
JetPack Version (valid for Jetson only): 4.6.0
TensorRT Version: 8001
Reproduction:
1. Before the update, the pipeline runs well.
2. A timeout callback is used to detect whether an update is needed.
3. If an update is needed, the following API is called to reset the model engine file:
   pgie.set_property("model-engine-file", model_file)
   (model_file is the same as before, just for testing)
4. Error logs:
        0:00:19.580137584  1399   0x7f2c080f50 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: model/model_2023-11-11/yolox_s.engine

0:00:19.694541567 1399 0x807d320 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:model/model_2023-11-11/yolox_s.engine sucessfully
0:00:19.717468515 1399 0x807c0a0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:19.717600997 1399 0x807c0a0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:735> [UID = 1]: Failed to parse bboxes
0:00:19.740391271 1399 0x807c0a0 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
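The timeout-driven swap in the steps above can be sketched as follows. This is a minimal pure-Python mock: `FakePgie`, `check_for_update`, and the callback wiring are illustrative stand-ins, not the actual application code; with the real DeepStream bindings you would call the same `set_property` on the nvinfer element and schedule the callback with `GLib.timeout_add_seconds`.

```python
# Minimal sketch of an OTA-style model swap (no GStreamer required).
# A periodic callback checks for a new engine file and, if one is found,
# resets the pgie element's "model-engine-file" property.

class FakePgie:
    """Stand-in for the nvinfer GstElement (property get/set only)."""
    def __init__(self):
        self._props = {}

    def set_property(self, name, value):
        self._props[name] = value

    def get_property(self, name):
        return self._props.get(name)


def make_update_callback(pgie, check_for_update):
    """Return a timeout callback that swaps the engine file when needed.

    check_for_update() returns the path of the new engine file, or None
    if no update is pending. Returning True from the callback keeps a
    GLib timeout source scheduled.
    """
    def on_timeout():
        model_file = check_for_update()
        if model_file is not None:
            pgie.set_property("model-engine-file", model_file)
        return True  # keep the timer running
    return on_timeout
```

In a real pipeline this would be registered with something like `GLib.timeout_add_seconds(interval, on_timeout)`; the interval and update-detection logic are application-specific.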

But the model engine file is the same as before, and the postprocess library is the same too, so why does the error happen?

Please correct the output-blob-names value in the configuration file. You can find the explanation in this link.

If the error persists, please provide more information: which sample are you testing? Could you share the complete running logs?

I added output-blob-names, but the errors persist.
The code is reimplemented based on several samples, such as test1 and test5.
log.txt (45.2 KB)

From the logs, the model’s output layer name is “output”, so output-blob-names should be set to output. If the error persists, please share the nvinfer configuration file.
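For reference, a minimal sketch of the relevant part of an nvinfer configuration; only output-blob-names=output comes from the logs above, while the engine path, function name, and library path are illustrative placeholders that would match your own custom parser:

```ini
[property]
model-engine-file=model/model_2023-11-11/yolox_s.engine
# Must match the model's actual output layer name:
output-blob-names=output
# Illustrative names for a custom YOLOX parser; use your own:
parse-bbox-func-name=NvDsInferParseCustomYoloX
custom-lib-path=./libnvdsinfer_custom_impl_yolox.so
```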

dstest_pgie_config.txt (2.7 KB)

Please remove “bboxes” from the output-blob-names setting, because the model has no layer with that name.

Removed, but the error persists.
A strange thing: even if I set output-blob-names to a wrong value, such as coverage, the pipeline runs well except for the model update.

Is there still a “Failed to parse bboxes” kind of error? Please share the current configuration file and running log.

yes
dstest_pgie_config.txt (2.7 KB)
log.txt (82.0 KB)

The nvinfer plugin is open source. This “Could not find output coverage layer for parsing objects” error is raised in DetectPostprocessor::parseBoundingBox.
Did you use the correct configuration file? If parse-bbox-func-name and custom-lib-path are set, nvinfer will use the custom parsing function. You can add a log in DetectPostprocessor::fillDetectionOutput to check why the custom parsing function did not take effect. In particular, after modifying the code you need to rebuild the plugin and replace /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_infer.so with the new .so.
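As background on why both settings matter: the custom parser is resolved at runtime by loading the shared library from custom-lib-path and looking up the symbol named by parse-bbox-func-name, and if that lookup fails the built-in coverage-layer parser runs instead, producing exactly this error. A minimal Python sketch of that kind of name-based lookup, using libm and sqrt purely as stand-ins for the custom library and parse function (the function name `resolve_parse_func` is my own, not a DeepStream API):

```python
import ctypes

def resolve_parse_func(lib_path, func_name):
    """Mimic a dlopen/dlsym-style lookup: load the shared library,
    then resolve the function by name. Return None if either step
    fails, which is the situation where a caller would have to fall
    back to a built-in default parser."""
    try:
        lib = ctypes.CDLL(lib_path)
        return getattr(lib, func_name)
    except (OSError, AttributeError):
        return None

# Stand-in demo on glibc Linux: resolve sqrt from the C math library.
sqrt = resolve_parse_func("libm.so.6", "sqrt")
if sqrt is not None:
    sqrt.restype = ctypes.c_double
    sqrt.argtypes = [ctypes.c_double]
```

The point of the sketch: a wrong library path or a misspelled function name fails silently at lookup time rather than at configuration-parsing time, which is why adding a log in fillDetectionOutput is the quickest way to see which branch actually runs.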

Before the model update, the pipeline runs well, so the custom parsing function is correct; the error happens only when I update the model while the pipeline is running.

Anyway, I will debug the infer code. Thank you.

Thanks for sharing. It is because parse-bbox-func-name and custom-lib-path are not reused after updating the model. Here is a solution.

  1. In the function DsNvInferImpl::initNewInferModelParams of /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer/gstnvinfer_impl.cpp, add the code between the //start and //end comments below.
        ......
        sizeof (newParams.labelsFilePath));
  }
  //start
  if (string_empty (newParams.customLibPath)
      && !string_empty (oldParams.customLibPath)) {
      g_strlcpy (newParams.customLibPath, oldParams.customLibPath,
        sizeof (newParams.customLibPath));
  }
  if (string_empty (newParams.customBBoxParseFuncName)
      && !string_empty (oldParams.customBBoxParseFuncName)) {
      g_strlcpy (newParams.customBBoxParseFuncName, oldParams.customBBoxParseFuncName,
        sizeof (newParams.customBBoxParseFuncName));
  }  //end
  2. Then rebuild libnvdsgst_infer.so according to the README, and replace /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_infer.so with the new .so.

Yes, you are right, the solution worked.

Thanks a lot ^_^

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.