• Hardware Platform (Jetson / GPU) - Tesla T4 GPU
• DeepStream Version - 7.0
• TensorRT Version - 8.6.1.6
• NVIDIA GPU Driver Version (valid for GPU only) - 535.171.04
• CUDA - 12.2
• Issue Type (questions, new requirements, bugs) - Question
I have a requirement where I need to set the model config file path property of the nvinfer element at runtime, repeatedly. Every video I process has its model config at a different file path, so I need to set that property each time before processing a video.
I modified my code accordingly and it works as expected, but I sometimes see this warning:
deepstream-2 | 0:09:44.615318721 1 0x791ceb0520b0 WARN nvinfer gstnvinfer_impl.cpp:346:notifyLoadModelStatus:<primary-nvinference-engine> warning: [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-7.0/app/fast_api_server/tmp/05503eff/configs/model_config.txt failed, reason: Trying to update model too frequently
So I want to know: is it okay to have this warning, or is it a bad idea to set the config file path property on the nvinfer element (or other elements) at runtime? Will there be any breaking issues if we run it for a long time, say processing 1000 videos?
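Since the warning says the update was attempted "too frequently", I am considering spacing out the updates myself. Below is a sketch of the throttle I have in mind. It is pure Python for illustration: `set_config` stands in for the real `pgie.set_property("config-file-path", path)` call on the nvinfer element, and the 5-second minimum gap is my own guess, not a documented nvinfer limit.

```python
import time


class ModelConfigUpdater:
    """Space out config-file-path updates so successive model loads
    are at least min_interval_s seconds apart (assumed value)."""

    def __init__(self, set_config, min_interval_s=5.0,
                 clock=time.monotonic, sleep=time.sleep):
        # set_config: callable applying the new path, e.g.
        #   lambda p: pgie.set_property("config-file-path", p)
        self.set_config = set_config
        self.min_interval_s = min_interval_s
        self.clock = clock          # injectable for testing
        self.sleep = sleep          # injectable for testing
        self._last_update = None

    def update(self, config_path):
        now = self.clock()
        if self._last_update is not None:
            wait = self.min_interval_s - (now - self._last_update)
            if wait > 0:
                self.sleep(wait)    # block until the minimum gap has elapsed
        self.set_config(config_path)
        self._last_update = self.clock()
```

Before each video I would call `updater.update(path_for_that_video)`; injecting `clock` and `sleep` keeps the throttle testable without a live pipeline.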
An answer that helps me decide whether to keep this approach (since it is working) would be much appreciated. Thanks in advance.