Here is my environment:
• Hardware Platform (Jetson TX2 NX)
• DeepStream 5.1
• JetPack Version 4.5.1
• TensorRT Version 7.1.3
I am trying to print the inference time of my model, so I followed the instructions in this post: https://forums.developer.nvidia.com/t/how-to-check-inference-time-for-a-frame-when-using-deepstream/141758/3?u=cleram
However, after recompiling libs/nvdsinfer/, installing the updated files, and running my pipeline:
$ gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_1080p_h264.mp4 ! \
      qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
      nvvideoconvert ! nvinfer config-file-path=config_infer.txt ! perf ! fakesink
I get a message telling me that inference on my model takes approximately 4415 us. But if I use an external tool such as gst-perf, I find that the pipeline runs at about 12 FPS, which does not match the printed message: 4415 us per frame would allow roughly 226 FPS, whereas 12 FPS corresponds to about 83 ms per frame (without nvinfer, gst-perf reports a much higher FPS).
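For reference, the timing change I compiled into libs/nvdsinfer follows the pattern below. This is only a minimal, self-contained sketch of the idea from the linked post: enqueueInference() is a made-up stand-in for the real backend enqueue call in nvdsinfer_context_impl.cpp, not an actual DeepStream function.

// Minimal sketch of the timing pattern added inside libs/nvdsinfer.
// enqueueInference() is a hypothetical stub standing in for the real
// backend enqueue call; the sleep only simulates some work.
#include <chrono>
#include <cstdio>
#include <thread>

// Placeholder for the actual inference enqueue (assumption for illustration).
static void enqueueInference()
{
    std::this_thread::sleep_for(std::chrono::milliseconds(4));
}

int main()
{
    auto t0 = std::chrono::steady_clock::now();
    enqueueInference();
    auto t1 = std::chrono::steady_clock::now();

    long long us =
        std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
    std::printf("Inference time: %lld us\n", us);
    return 0;
}

One caveat that may be relevant to my question: std::chrono here measures wall-clock time on the CPU around the call, so if the timed call only enqueues work on the GPU asynchronously, the printed number would cover the enqueue rather than the full inference.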
In the past I ran the same model through the Python TensorRT API, and inference took around 83 ms. I also tested this approach with the same model on JetPack 4.4 on a TX2 and measured 83-90 ms of inference time.