On-the-fly model update doesn't work

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only) docker
• TensorRT Version N/A
• NVIDIA GPU Driver Version (valid for GPU only) 525
• Issue Type( questions, new requirements, bugs) questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hello,
currently I’m trying the on-the-fly model update feature. I saw it in the deepstream-test5 example and followed that example in my project, but it doesn’t work.

While the pipeline is running, I call
g_object_set (G_OBJECT (nvinfer), "model-engine-file", model_engine_file_path, NULL);
but I got this error:
0:00:33.671413975 51304 0x7f3dc0004d50 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initResource() <nvdsinfer_context_impl.cpp:821> [UID = 1]: cluster mode 0 not supported with instance segmentation
ERROR: nvdsinfer_context_impl.cpp:1074 Infer Context failed to initialize post-processing resource, nvinfer error:NVDSINFER_CONFIG_FAILED
ERROR: nvdsinfer_context_impl.cpp:1280 Infer Context prepare postprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED
0:00:33.680632385 51304 0x7f3dc0004d50 WARN nvinfer gstnvinfer_impl.cpp:331:notifyLoadModelStatus: warning: [UID 1]: Load new model:/workspace/build/models/pose/RepVGG_B1-230923-101712-cocokp-edge321-o10s.pkl.epoch500.769x433.dbz.onnx_fp16_xxxx_bs1.engine failed, reason: Creation new model context failed

Could you explain what the problem is?

Can you elaborate on how you set model-engine-file while the pipeline is running? Did you use the -o <config file> option as mentioned in “section 7” of the deepstream-test5-app README?

@yingliu I ran the example and it worked; however, when I applied it to my application, I got the error above.
Are there any special steps in setting up the pipeline so the feature can work?

Currently I just declare nvinfer as a global variable and call the command below from another thread to simulate an on-the-fly model update.
g_object_set (G_OBJECT (nvinfer), "model-engine-file", model_engine_file_path, NULL);

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

No. But you need to ensure the following conditions:

on-the-fly
  1. Option -o
    GIE Model update configuration override file; {provide path of override config file}

    This option is used to demonstrate the on-the-fly model update feature.
    deepstream-test5-app is to be launched with the -o <ota_override_file> option to
    test the on-the-fly OTA functionality.

    Steps to test the OTA functionality

    1. Run deepstream-test5-app with -o <ota_override_file> option
    2. While DS application is running, update the <ota_override_file> with new model details
      and save it
    3. File content changes get detected by deepstream-test5-app, which then starts
      the model-update process

    Currently only model-update feature is supported as a part of OTA functionality.
    Assumptions for on-the-fly model updates are as below:

    1. The new model must have the same network parameter configuration as the previous
      model (e.g. network resolution, network architecture, number of classes)
    2. The engine file or cache file of the new model is to be provided by the developer
    3. Updates to other primary GIE configuration parameters (e.g. group-threshold,
      bbox color, gpu-id, nvbuf-memory-type), if provided in the override file,
      will not have any effect after the model switch
    4. Secondary GIE model update is not validated; only primary model update is validated
    5. No frame drops / frames without inference should be observed during the
      on-the-fly model update process
    6. In case of a model update failure, an error message will be printed on the
      console and the pipeline should continue running with the older model configuration
    7. The config-file parameter is needed to suppress config file parsing error prints;
      values from this config file are not used during the model switch process
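For step 2 above, the override-file edit amounts to pointing the primary GIE at the new engine and its nvinfer config. A rough sketch (the section and key names here follow the usual test5 config style and should be checked against the sample <ota_override_file> shipped with deepstream-test5; the paths are placeholders):

```ini
[primary-gie]
model-engine-file=/path/to/new_model_b1_gpu0_fp16.engine
config-file=/path/to/config_infer_primary_new_model.txt
```

Saving the file while the app is running should trigger the file watch and start the model-update sequence described above.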

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.