Is dynamic nvinfer possible?

• Hardware Platform (Jetson / GPU) Tesla T4
• DeepStream Version 6.0
• TensorRT Version 8.0.0.1
• NVIDIA GPU Driver Version (valid for GPU only) 470.63.01

Can I change nvinfer’s config file dynamically, while the pipeline is playing or paused? When I attempt this, the log output suggests the new model has been loaded, but every other indication (nvinfer’s config-file-path property value, as well as its metadata output) tells me the old model is still being used.

In the end, I want to make different models available to clients on a server that will stream out over UDP/RTSP. Running an HTTP server that launches the DeepStream pipeline on request seems like the most obvious approach, but is there another method that is more readily available or a better fit for DeepStream’s own infrastructure?
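For what it’s worth, the "HTTP server that launches the pipeline on request" idea can be sketched with nothing but the standard library. This is only an illustration: the `deepstream-app` invocation, the config paths, and the model names are assumptions, and the launched pipeline is expected to handle its own RTSP/UDP output as configured.

```python
# Minimal sketch: map a requested model name to a per-model DeepStream
# config and launch the pipeline as a subprocess. Paths and model names
# below are hypothetical placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer
import subprocess

# Hypothetical mapping from client-visible model names to app configs.
MODEL_CONFIGS = {
    "peoplenet": "/opt/configs/deepstream_app_peoplenet.txt",
    "trafficcam": "/opt/configs/deepstream_app_trafficcam.txt",
}

def build_command(model):
    """Return the deepstream-app command line for the requested model,
    or None if the model is unknown."""
    config = MODEL_CONFIGS.get(model)
    if config is None:
        return None
    return ["deepstream-app", "-c", config]

class ModelLauncher(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect request paths like /start/peoplenet
        parts = self.path.strip("/").split("/")
        cmd = (build_command(parts[1])
               if len(parts) == 2 and parts[0] == "start" else None)
        if cmd is None:
            self.send_error(404, "unknown model")
            return
        # The spawned pipeline streams out over RTSP/UDP per its config.
        subprocess.Popen(cmd)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"started: " + " ".join(cmd).encode())

# To run the server:
#   HTTPServer(("0.0.0.0", 8080), ModelLauncher).serve_forever()
```

One process per requested model sidesteps the runtime-reconfiguration problem entirely, at the cost of a pipeline start-up delay per request.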

Please refer to the documentation below:
On the Fly Model Update — DeepStream 6.0.1 Release documentation (nvidia.com)
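As I understand the linked page, the supported mechanism swaps in a new model engine file while the app is running, subject to the new model matching the old one’s network properties (network type, input resolution, number of classes). In the reference deepstream-test5 app this is driven by an override file naming the new engine, roughly of this form (the section name follows the docs; the path is a hypothetical placeholder):

```
[primary-gie]
model-engine-file=/opt/models/resnet18_updated.engine
```

It updates the engine in place rather than re-reading a different nvinfer config file, which is why setting config-file-path at runtime does not take effect.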

Ah. I’ve been trying to load different models entirely: a different config file path, a different architecture, a different number of classes. That seems not to be supported.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.