Preload DeepStream model

Is there a way to pre-load the model? We would be starting inference from a web URL.

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details needed to reproduce it.)
• Requirement details (This is for a new requirement. Include the module name, i.e. which plugin or which sample application it is for, and a description of the function.)

GPU: A100
DeepStream version: 5.1
TensorRT: 8.1

I need to improve the latency. Also, if there are any config changes that could be used, please let me know.
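As a starting point, below is a minimal sketch of latency-oriented settings in a deepstream-app configuration. These are standard config-group keys; the values are illustrative assumptions only and need tuning for the actual pipeline:

```
# Sketch: common latency-oriented keys in a deepstream-app config.
# Values are illustrative assumptions, not tuned recommendations.

[streammux]
live-source=1              # treat the web URL as a live source
batch-size=1
batched-push-timeout=40000 # microseconds; a lower timeout pushes batches out sooner

[primary-gie]
batch-size=1
interval=1                 # run inference on every other frame to reduce load

[sink0]
sync=0                     # do not sync output to the clock; emit frames as soon as ready
```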

Can you try the sample in /opt/nvidia/deepstream/deepstream-6.0/sources/apps/sample_apps/deepstream-test1? It only builds the engine the first time.
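The rebuild at startup can also be avoided by serializing the TensorRT engine once, offline, and pointing nvinfer at the resulting file so it is deserialized directly. A minimal sketch, assuming an ONNX model; the model and engine file names here are hypothetical:

```
# Build the engine once with trtexec (file names are assumptions for illustration):
#   trtexec --onnx=model.onnx --saveEngine=model_b1_fp16.engine --fp16

# nvinfer config: load the pre-built engine instead of rebuilding it at startup.
[property]
model-engine-file=model_b1_fp16.engine
batch-size=1
network-mode=2   # 0=FP32, 1=INT8, 2=FP16; must match how the engine was built
```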

Can't this be implemented on DeepStream 5.1?

Yes.

Can you tell me how …?
