Gst-nvinfer functionality for On the Fly Model Update in DeepStream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Nano
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6
• TensorRT Version 8.0.1
• Issue Type (questions, new requirements, bugs) New requirements / question
• Requirement details (for new requirements: the module name, i.e. which plugin or which sample application, and the function description)

My requirement / question concerns the gst-nvinfer plugin. The comments for the DsNvInferImpl class (gstnvinfer_impl.h:191) explain that a model can be updated at runtime by setting “config-file-path”. This is the mechanism that On the Fly Model Update uses in deepstream-app-5. The corresponding page in the DeepStream 6 docs (and the README for deepstream-app-5) notes this assumption:

New model must have same network parameter configuration as of previous model (e.g. network resolution, network architecture, number of classes).

The source code comments, however, only state that the input resolution, the channel count, and the model type (Detection/Classification/Segmentation) must not change; they do not mention the network-architecture constraint.
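For reference, a runtime update via “config-file-path” typically points gst-nvinfer at a new config file in which only the model files change. A minimal sketch of such a config, assuming hypothetical file names and example values; the network-level properties marked below must stay identical to the previous model's config:

```
# new_model_config.txt (hypothetical) - differences from the previous
# config should be limited to the model files themselves
[property]
onnx-file=new_detector.onnx            # new weights (hypothetical path)
model-engine-file=new_detector.engine  # hypothetical engine file

# these must match the previous model's config:
infer-dims=3;368;640                   # channels;height;width unchanged
network-type=0                         # 0 = detector, unchanged
num-detected-classes=4                 # same number of classes
batch-size=1
```

The values above are illustrative only; per the DeepStream docs, the underlying network architecture must also remain the same even though it is not expressed by any single config key.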

In my application I need to change the model (preferably at runtime, but not necessarily) without changing the input resolution or channels, and the model will always remain a detection model. In my current test application (based on deepstream-app-1), I can change the model at runtime using this functionality, but only if I don’t change the network architecture. I cannot change the network architecture (for example, ResNet to FasterRCNN); when I try to, a segmentation fault occurs somewhere after NvDsInferContextImpl::buildModel() is called.

My question is whether the DeepStream docs page is authoritative in this case, meaning that updating “config-file-path” does carry the constraint that the network architecture cannot change. If so, is there another way to change the model architecture at runtime? Will this functionality be implemented soon? If not, what am I doing wrong in my case?

The code comments and the documentation differ; I am checking.

Please test according to the published document: On the Fly Model Update — DeepStream 6.0 Release documentation.
As the doc says, the new model must have the same network architecture.

Is there another way to change the model architecture at runtime? I could destroy the DeepStream pipeline and create a new one, but that seems very inefficient.

There is no better method; changing the model architecture at runtime is currently not supported.

Thanks. One more question: what exactly does “same network architecture” mean? Will ResNet10 work with ResNet18? Will ResNet10 as a caffemodel work with ResNet10 as ONNX? Will YOLOv3 as a caffemodel work with ResNet34 as a caffemodel?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.