Changing config file path property at runtime for nvinfer, nvdsanalytics elements

• Hardware Platform (Jetson / GPU) - Tesla T4 GPU
• DeepStream Version - 7.0
• TensorRT Version - 8.6.1.6
• NVIDIA GPU Driver Version (valid for GPU only) - 535.171.04
• CUDA - 12.2
• Issue Type (questions, new requirements, bugs) - Question

I have a requirement where I need to continuously set the model config file path property at runtime for the nvinfer element. Every video I process has its model config at a different file path, so I need to set the property each time before processing a video.
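For context, the update is done roughly like the sketch below (a minimal illustration, not my exact code; the element name matches the one in the warning log further down, while the helper name and config path are assumptions):

    #include <gst/gst.h>

    /* Minimal sketch: before processing the next video, point nvinfer at
     * that video's model config. "primary-nvinference-engine" matches the
     * element name in the warning log below; the helper name and config
     * path are assumptions. */
    static void
    set_next_model_config (GstPipeline *pipeline, const gchar *config_path)
    {
      GstElement *nvinfer =
          gst_bin_get_by_name (GST_BIN (pipeline), "primary-nvinference-engine");

      /* "config-file-path" is the nvinfer property being changed at runtime. */
      g_object_set (G_OBJECT (nvinfer), "config-file-path", config_path, NULL);

      gst_object_unref (nvinfer);
    }

Each call like this can trigger a model reload, which is what sometimes races with the previous load and produces the warning below.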

So I modified the code accordingly, and it works as expected, but I sometimes get this warning:

deepstream-2     | 0:09:44.615318721     1 0x791ceb0520b0 WARN                 nvinfer gstnvinfer_impl.cpp:346:notifyLoadModelStatus:<primary-nvinference-engine> warning: [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-7.0/app/fast_api_server/tmp/05503eff/configs/model_config.txt failed, reason: Trying to update model too frequently

So I want to know: is it okay to have this warning, or is it a bad idea to set the config file path property at runtime for nvinfer (or other elements)? Are there any breaking issues if we run it for a long time, say to process 1000 videos?

It would be nice to have an answer that helps me decide whether to keep using this approach (since it is working) or not. Thanks in advance.

The log means "Check if a model is loaded but not yet being used for inferencing."; it seems the model was updated too frequently. The nvinfer plugin and the low-level library are open source; please refer to DsNvInferImpl::loadModel.
nvinfer supports OTA functionality; the Test5 app can update the models in a running pipeline on the fly.

Thanks for the information. In my case I am actually loading the same model each time; the model path is specified in the model config file, and I am only changing the config file path at runtime. Is this warning harmful in that particular scenario?

As the code of DsNvInferImpl::loadModel shows, this log indicates an abnormal case: the new model will not be loaded because the function returns early.
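Paraphrased, that early return behaves roughly like the sketch below. Everything here is an illustrative assumption rather than the verbatim source; only the warning text matches the actual log:

    // Paraphrased sketch of the throttle check in DsNvInferImpl::loadModel.
    // All names here are illustrative assumptions; only the warning text
    // matches the real source.
    #include <cstdio>
    #include <string>

    enum Status { STATUS_OK, STATUS_UNKNOWN_ERROR };

    // Stand-in for the real check: true while a previously loaded model
    // has not yet been picked up by the inference loop.
    static bool model_pending_use = true;

    Status loadModel (const std::string &modelPath) {
      if (model_pending_use) {
        // The earlier model is still waiting to be used for inferencing,
        // so this update is rejected and nothing new is loaded.
        std::printf ("Load new model:%s failed, reason: "
                     "Trying to update model too frequently\n",
                     modelPath.c_str ());
        return STATUS_UNKNOWN_ERROR;  // early return: the update is dropped
      }
      // ... the normal model reload path would continue here ...
      return STATUS_OK;
    }

    int main () {
      loadModel ("/tmp/model_config.txt");  // prints the warning
      return 0;
    }

So when the warning fires, that particular config update is silently skipped rather than applied late.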

Thanks. And where can I find the code for DsNvInferImpl::loadModel in nvinfer?

/opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer/gstnvinfer_impl.cpp. The function includes the following code:

    notifyLoadModelStatus (ModelStatus {
        NVDSINFER_UNKNOWN_ERROR, modelPath, "Trying to update model too frequently"});

Thanks

Sorry for the late reply. Is this still a DeepStream issue that needs support? Thanks!
