Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Jetson AGX
• DeepStream Version 7.0
• JetPack Version (valid for Jetson only) 6.0 GA
• TensorRT Version 8.6
• NVIDIA GPU Driver Version (valid for GPU only) 540.3.0
• Issue Type (questions, new requirements, bugs) questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
How can I check the inference time for a frame when using deepstream-test5-app?
Hi,
We have an open-source tool called gst-shark for GStreamer profiling that you can use to measure interlatency and obtain the nvinfer processing time.
Installation:
sudo apt install -y libgraphviz-dev
git clone git@github.com:RidgeRun/gst-shark.git
cd gst-shark/
meson build --prefix /usr/
ninja -C build
sudo ninja -C build install
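Note that meson and ninja are also required for the build (on Ubuntu: sudo apt install -y meson ninja-build), and if you don't have GitHub SSH keys configured, cloning over HTTPS works as well:
git clone https://github.com/RidgeRun/gst-shark.git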
To see all the available tracers, run:
gst-inspect-1.0 | grep tracers
sharktracers: buffer (GstTracerFactory)
sharktracers: bitrate (GstTracerFactory)
sharktracers: queuelevel (GstTracerFactory)
sharktracers: framerate (GstTracerFactory)
sharktracers: scheduletime (GstTracerFactory)
sharktracers: interlatency (GstTracerFactory)
sharktracers: proctime (GstTracerFactory)
sharktracers: graphic (GstTracerFactory)
sharktracers: cpuusage (GstTracerFactory)
coretracers: latency (GstTracerFactory)
coretracers: log (GstTracerFactory)
coretracers: rusage (GstTracerFactory)
coretracers: stats (GstTracerFactory)
coretracers: leaks (GstTracerFactory)
To use it, enable the tracers with the GST_DEBUG variable and then select the desired tracers using GST_TRACERS. Below is an example of how to use gst-shark to measure interlatency in the deepstream-test5-app:
GST_DEBUG=*TRACE*:9 GST_TRACERS="interlatency" deepstream-test5-app ...
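The interlatency tracer reports the time a buffer takes to travel between points in the pipeline; for the time nvinfer itself spends processing each buffer, the proctime tracer is the closer fit. Multiple tracers can be combined with semicolons, for example:
GST_DEBUG=*TRACE*:9 GST_TRACERS="proctime;interlatency" deepstream-test5-app ...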
In DeepStream, videos are inferenced in batches, not frame by frame. What does your "inference time" refer to? Are "preprocessing" and "postprocessing" included?
How can I print this message to a file?
Only the batch inference time can be obtained. In gst-nvinfer, there is no log of the inference time. There are two ways to get the time:
- Add latency calculation code to the NvDsInferContextImpl::queueInputBatch() function in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/nvdsinfer_context_impl.cpp (a sketch of the timing pattern follows this list). Rebuild libnvds_infer.so and run your app with the new libnvds_infer.so.
- Use the Nsight tools to profile the app. See DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums
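A minimal sketch of that timing pattern, written here as a standalone program so it compiles and runs on its own; runInference() is a stand-in for the batched TensorRT enqueue inside queueInputBatch(), so only the std::chrono wrapping and the printf would carry over into the real function:

#include <chrono>
#include <cstdio>
#include <thread>

// Placeholder for the batched TensorRT enqueue performed inside
// NvDsInferContextImpl::queueInputBatch(); here it just sleeps.
static void runInference()
{
    std::this_thread::sleep_for(std::chrono::milliseconds(12));
}

int main()
{
    auto t0 = std::chrono::steady_clock::now();
    runInference();
    auto t1 = std::chrono::steady_clock::now();

    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    // In the real patch this printf lands on the app's stdout, so make
    // sure stdout is not swallowed by your launch script.
    std::printf("batch inference time: %.3f ms\n", ms);
    return 0;
}

steady_clock is used because it is monotonic and unaffected by system clock adjustments, which matters for short interval measurements.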
1. There is a topic about the first option.
I modified NvDsInferContextImpl::queueInputBatch() in nvdsinfer_context_impl.cpp, rebuilt the .so, and copied it into the lib directory, but I don't see the printed output.
For the second option, I didn't find any useful information in that topic.
The DeepStream library directory is /opt/nvidia/deepstream/deepstream/lib
With these environment variables you can control the output files and set their location (GST_SHARK_CTF_DISABLE=TRUE disables generation of the CTF output files; GST_SHARK_LOCATION sets the directory where they are written):
export GST_SHARK_CTF_DISABLE=TRUE
export GST_SHARK_LOCATION=/tmp/profile
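For example, to capture proctime traces as CTF files under /tmp/profile, leave GST_SHARK_CTF_DISABLE unset (setting it to TRUE turns the files off) and run:
export GST_SHARK_LOCATION=/tmp/profile
GST_DEBUG=*TRACE*:9 GST_TRACERS="proctime" deepstream-test5-app ...
The resulting traces under /tmp/profile can then be inspected with the gstshark-plot scripts shipped with gst-shark.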