How do I measure the inference time for a single frame? I’m using a TLT pre-trained SSD model.
Essentially, I want to print two timestamps: one before nvinfer executes and one after.
Please show me where to do so in this script.
• DeepStream Version = 5.0 • JetPack Version = 4.4
You could add timestamp prints before and after this call in
sources/libs/nvdsinfer/nvdsinfer_context_impl.cpp::NvDsInferContextImpl::queueInputBatch:

RETURN_NVINFER_ERROR(m_BackendContext->enqueueBuffer(backendBuffer,
    *m_InferStream, m_InputConsumedEvent.get()),
    "Infer context enqueue buffer failed");
Hi amycao,
Yes, that technique lets me print the inference time. However, what I actually want is to obtain this inference time inside my custom DeepStream application.
Is this the only place where this time can be calculated?
Is it possible to get this in nvinfer’s src pad probe?
It may not be appropriate to treat the run time of m_BackendContext->enqueueBuffer(backendBuffer, *m_InferStream, m_InputConsumedEvent.get()) as the inference time. Since inference runs in a different CUDA stream, the call is asynchronous: when enqueueBuffer returns, inference may not have finished yet. You should use trtexec to get the inference time instead. You can find it under /usr/src/tensorrt/bin/