Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Jetson AGX
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only) 6.0 GA
• TensorRT Version 8.6
• NVIDIA GPU Driver Version (valid for GPU only) 540.3.0
• Issue Type (questions, new requirements, bugs) questions
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
How can I check the inference time for a frame when using deepstream-test5-app?
To use gst-shark, enable tracer logging with the GST_DEBUG variable and select the desired tracers with GST_TRACERS. Below is an example of how to use gst-shark to measure interlatency with deepstream-test5-app:
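A minimal sketch, assuming gst-shark's tracers are installed where GStreamer can find them; the config file name is a placeholder for your own:

```
# Select gst-shark's interlatency tracer and raise the tracer log level
# so measurements are printed, then run the app as usual.
# The config file path below is a placeholder.
GST_DEBUG="GST_TRACER:7" GST_TRACERS="interlatency" \
  ./deepstream-test5-app -c <your_test5_config>.txt
```

The interlatency tracer reports the time a buffer takes to travel between pads in the pipeline, so it shows per-element latency rather than the model's inference time alone.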
In DeepStream, video frames are inferred in batches, not frame by frame. What does your "inference time" refer to? Are "preprocessing" and "postprocessing" included?
Only the batch inference time can be obtained; gst-nvinfer does not log inference time. There are two ways to get the time.

Add latency calculation code in the NvDsInferContextImpl::queueInputBatch() function in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/nvdsinfer_backend.cpp, then rebuild libnvds_infer.so and run your app with the new libnvds_infer.so. A sketch of such instrumentation is shown below.
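As a rough illustration (this is not DeepStream code; ScopedTimer is a name I made up), a small RAII timer can be added to the nvdsinfer sources and instantiated at the top of queueInputBatch() to log how long each call takes:

```cpp
#include <chrono>
#include <cstdio>

// RAII helper: prints the elapsed wall-clock time between construction
// and destruction. Instantiate at the top of queueInputBatch() so the
// duration of the whole call is logged for every batch.
class ScopedTimer
{
public:
    explicit ScopedTimer(const char *tag)
        : m_tag(tag), m_start(std::chrono::steady_clock::now()) {}

    ~ScopedTimer()
    {
        auto end = std::chrono::steady_clock::now();
        double ms =
            std::chrono::duration<double, std::milli>(end - m_start).count();
        printf("%s: %.3f ms\n", m_tag, ms);
    }

private:
    const char *m_tag;
    std::chrono::steady_clock::time_point m_start;
};

// Usage inside NvDsInferContextImpl::queueInputBatch():
//   ScopedTimer timer("queueInputBatch");
//   /* existing function body; elapsed time prints when it returns */
```

Keep in mind this measures the time spent queuing the batch, and depending on how the backend synchronizes with the GPU it may not equal end-to-end inference time; it is also per batch, not per frame.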