• Hardware Platform: GPU
• DeepStream Version: 5.0.0
• TensorRT Version: 188.8.131.52
• NVIDIA GPU Driver Version (valid for GPU only): 460.32.03
Hi, I just started using the DeepStream SDK and have a question:
Question: Using deepstream-app, I set up a config based on the objectDetection_SSD example. How can I measure the time needed for an inference? I'm not sure about the terminology here, but what I mean is the time it takes for my model to execute.
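In case it helps, this is the measurement block I have tried in my config so far, a sketch based on the `enable-perf-measurement` options in the `[application]` group; as far as I can tell it reports per-stream FPS (end-to-end pipeline throughput), which may not be the same thing as the model's inference latency:

```
[application]
# Print per-stream FPS at the given interval. This appears to measure
# end-to-end pipeline throughput, not the raw inference time of the model.
enable-perf-measurement=1
perf-measurement-interval-sec=5
```

Is there a recommended way to get the actual inference time instead of (or in addition to) this FPS readout?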