Inference time and model loading time

• Hardware Platform (Jetson / GPU): GPU A30
• DeepStream Version: 7.0
• TensorRT Version: 10.0.0.6
• NVIDIA GPU Driver Version (valid for GPU only): 535.104.12
• Issue Type (questions, new requirements, bugs): Questions

  • Is there any plugin that makes it possible to know the inference time per object?

  • Is there a way to calculate the time it takes to load the model?
    Or would it have to be measured manually, from start to finish?

Do you want to know the latency of an element? There are two choices:
1. Refer to this FAQ.

2. Use nvdslogger; you can refer to deepstream-test3 for more information (a probe sketch based on test3 follows below).
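For reference, here is a minimal sketch of the latency-measurement probe used in deepstream-test3, assuming the process is launched with NVDS_ENABLE_LATENCY_MEASUREMENT=1 (and NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1 for per-element numbers). MAX_SOURCES and the pad you attach to are assumptions; adapt them to your pipeline:

```c
/* Sketch of a buffer probe that prints per-frame latency, modeled on
 * deepstream-test3. Requires:
 *   export NVDS_ENABLE_LATENCY_MEASUREMENT=1
 *   export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1   (per-element)
 */
#include <gst/gst.h>
#include "nvds_latency_meta.h"

#define MAX_SOURCES 16  /* assumption: size for your batch/stream count */

static GstPadProbeReturn
latency_probe_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);

  if (nvds_get_enable_latency_measurement ()) {
    NvDsFrameLatencyInfo latency_info[MAX_SOURCES];
    guint num_frames = nvds_measure_buffer_latency (buf, latency_info);

    for (guint i = 0; i < num_frames; i++) {
      g_print ("source %u frame %u latency = %f ms\n",
          latency_info[i].source_id,
          latency_info[i].frame_num,
          latency_info[i].latency);
    }
  }
  return GST_PAD_PROBE_OK;
}
```

Attach it with gst_pad_add_probe() as a GST_PAD_PROBE_TYPE_BUFFER probe on the sink pad of an element near the end of the pipeline (e.g. the tiler or the sink), so the reported latency covers the inference elements upstream.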

Do you mean building the engine or just loading the engine? You need to modify the code yourself; it's open source.
Please refer to /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/nvdsinfer_model_builder.cpp
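If you only need a rough "start - finish" number, a sketch like the following, using GLib's monotonic clock, is enough. Here load_engine() is a hypothetical stand-in for whichever build/deserialize call you would wrap inside nvdsinfer_model_builder.cpp:

```c
/* Minimal sketch: measure a start-finish interval with GLib's
 * monotonic clock. load_engine() is hypothetical; a sleep stands in
 * for the engine build/deserialize work you actually want to time. */
#include <glib.h>

static void
load_engine (void)
{
  g_usleep (250 * 1000);  /* stand-in for deserializing the engine */
}

int
main (void)
{
  gint64 start_us = g_get_monotonic_time ();
  load_engine ();
  gint64 elapsed_us = g_get_monotonic_time () - start_us;

  g_print ("engine load took %.3f ms\n", elapsed_us / 1000.0);
  return 0;
}
```

Placing the same two timestamps around the engine-loading call in nvdsinfer_model_builder.cpp and rebuilding the library gives you the load time without changing any behavior.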
