How to check inference time for a frame when using DeepStream

How do I measure the inference time for a single frame? I’m using a TLT pre-trained SSD model.
Essentially, I want to print two timestamps: one before nvinfer executes, and another after nvinfer executes.
Please show me where to do so in this script.

• DeepStream Version = 5.0
• JetPack Version = 4.4

You could add timestamp prints before and after the enqueueBuffer call in
sources/libs/nvdsinfer/nvdsinfer_context_impl.cpp, in NvDsInferContextImpl::queueInputBatch:
RETURN_NVINFER_ERROR(m_BackendContext->enqueueBuffer(backendBuffer,
    *m_InferStream, m_InputConsumedEvent.get()),
    "Infer context enqueue buffer failed");

like this,

struct timeval start, stop;

gettimeofday(&start, NULL);

RETURN_NVINFER_ERROR(m_BackendContext->enqueueBuffer(backendBuffer,
                         *m_InferStream, m_InputConsumedEvent.get()),
    "Infer context enqueue buffer failed");

gettimeofday(&stop, NULL);
printf("time of infer takes: %ld us\n", (stop.tv_sec - start.tv_sec) * 1000000 + (stop.tv_usec - start.tv_usec));

Then recompile and replace the library (back up the original libnvds_infer.so under lib/ first):
export CUDA_VER=10.2
make -C libs/nvdsinfer/
sudo cp libs/nvdsinfer/libnvds_infer.so lib/


Thanks, it worked!
Also, for future reference, to make the code snippet complete: #include <sys/time.h>

Hi amycao,
Yes, this technique prints the inference time. However, what I actually want is to get this inference time inside my custom DeepStream application.
Is this the only place where this time can be calculated?
Is it possible to get this in nvinfer’s src pad probe?

Regards
Mandar

Yes, you can.
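
If it helps, here is a minimal sketch of how that could look with pad probes. The element variable name pgie, the helper function name, and the assumption that only one buffer is in flight inside nvinfer at a time are mine, not from this thread. Note that this measures the element's wall-clock latency (queueing plus pre-processing, inference and post-processing), not the pure TensorRT execution time:

/* Sketch: time how long a buffer spends inside nvinfer using sink/src pad probes.
 * Assumes a single buffer in flight at a time, so one variable pairs the timestamps. */
#include <gst/gst.h>

static gint64 sink_ts_us;   /* time the buffer entered nvinfer */

static GstPadProbeReturn
pgie_sink_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  sink_ts_us = g_get_monotonic_time ();
  return GST_PAD_PROBE_OK;
}

static GstPadProbeReturn
pgie_src_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  gint64 now_us = g_get_monotonic_time ();
  g_print ("nvinfer latency: %" G_GINT64_FORMAT " us\n", now_us - sink_ts_us);
  return GST_PAD_PROBE_OK;
}

static void
attach_latency_probes (GstElement *pgie)
{
  GstPad *sinkpad = gst_element_get_static_pad (pgie, "sink");
  GstPad *srcpad  = gst_element_get_static_pad (pgie, "src");

  gst_pad_add_probe (sinkpad, GST_PAD_PROBE_TYPE_BUFFER,
                     pgie_sink_probe, NULL, NULL);
  gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BUFFER,
                     pgie_src_probe, NULL, NULL);

  gst_object_unref (sinkpad);
  gst_object_unref (srcpad);
}

Call attach_latency_probes() on your nvinfer element after the pipeline is built, before setting it to PLAYING.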

It may not be appropriate to treat the run time of m_BackendContext->enqueueBuffer(backendBuffer,
*m_InferStream, m_InputConsumedEvent.get()) as the inference time. Since inference runs in a different CUDA stream, the call is asynchronous and may return before inference has finished. You should use trtexec to get the inference time; you can find it under /usr/src/tensorrt/bin/