Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson AGX
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only): 6.0 GA
• TensorRT Version: 8.6
• NVIDIA GPU Driver Version (valid for GPU only): 540.3.0
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
I am running the deepstream-test5 application and want to measure only the inference time of gst-nvinfer.
Could you help me with this problem?
Please refer to this FAQ for latency measurement; it will output the latency of the gst-nvinfer plugin.
The nvinfer plugin is open source, so you can also add logs to measure the inference time yourself.
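As a minimal sketch of the built-in latency measurement mentioned above: DeepStream reads two environment variables to enable frame and per-component latency logging. The variable names below are taken from the DeepStream documentation; the config file path is a placeholder for your own test5 config.

```shell
# Enable DeepStream's built-in latency measurement before launching the app.
export NVDS_ENABLE_LATENCY_MEASUREMENT=1
# Also report per-component latency (includes the nvinfer element).
export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1

# Hypothetical invocation; substitute your actual config file.
# ./deepstream-test5-app -c your_test5_config.txt
```

With both variables set, the app prints per-frame latency broken down by component, so the gst-nvinfer contribution can be read directly from the log.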
Sorry for the late reply! The low-level inference function is m_BackendContext->enqueueBuffer, called from NvDsInferContextImpl::queueInputBatch / NvDsInferContextImpl::queueInputBatchPreprocessed. You can add logs there to check.
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.