How to manually calculate FPS instead of using GetNetworkFPS

Hi

We have run the detectnet application on a Jetson Nano and are getting 30-32 FPS. We have now written our own custom OpenCV code which uses YOLOv5s. There is not much lag or delay in inferencing, but when we calculate the FPS we only get around 6-7 FPS, which doesn't make sense because the video is being processed fine without any lag, so the FPS should be around 20-25.

When we tested the same code on a PC, the FPS looked fine, but on the Nano it drops even though video inferencing happens without any lag.

I understand that GetNetworkFPS() measures the time the GPU took to inference the frame, while our code only uses the CPU to calculate the FPS, and that is why our FPS is very low even though video inferencing is fine. Is this understanding correct? If yes, how can we calculate FPS using the GPU?

Thanks

Hi @ART97, GetNetworkFPS() reports the GPU time from inferencing, which is profiled using cudaEvents:

https://github.com/dusty-nv/jetson-inference/blob/6bf94f753c727ea50f256fdec5fbe74bee540773/c/tensorNet.h#L635

If you are using Python, I believe the pyCUDA library has APIs available for using cudaEvents from Python.
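For example, a minimal sketch with pyCUDA could look like this (the run_inference callable is a placeholder for your own YOLOv5 call, and it assumes the GPU work is launched on the default CUDA stream):

```python
import pycuda.autoinit          # creates/initializes a CUDA context
import pycuda.driver as cuda

def time_on_gpu(run_inference):
    start = cuda.Event()
    end = cuda.Event()

    start.record()              # enqueue the 'start' event on the default stream
    run_inference()             # placeholder for your YOLOv5 inference call
    end.record()                # enqueue the 'end' event after the GPU work
    end.synchronize()           # block until the GPU has reached 'end'

    ms = start.time_till(end)   # elapsed GPU time in milliseconds
    print(f"GPU time: {ms:.2f} ms (~{1000.0 / ms:.1f} FPS)")
```

If your YOLOv5 model runs through PyTorch, torch.cuda.Event provides the same event-based timing without needing a separate pyCUDA context.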

Alternatively, are you using a CPU timer to calculate the FPS of your entire processing loop, or just the YOLO inferencing?
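For reference, here is a minimal sketch that times both with a CPU timer, so you can see how much of the per-frame time is inference versus capture, post-processing, and display (the read_frame, infer, and display callables are placeholders for your own code):

```python
import time

def measure_fps(read_frame, infer, display, num_frames=100):
    """Compare inference-only FPS against end-to-end loop FPS."""
    infer_time = 0.0
    loop_time = 0.0

    for _ in range(num_frames):
        t0 = time.perf_counter()
        frame = read_frame()        # e.g. cap.read() with OpenCV
        t1 = time.perf_counter()
        detections = infer(frame)   # e.g. the YOLOv5 forward pass
        t2 = time.perf_counter()
        display(frame, detections)  # e.g. drawing boxes + cv2.imshow()
        t3 = time.perf_counter()

        infer_time += (t2 - t1)
        loop_time += (t3 - t0)

    print(f"inference-only FPS: {num_frames / infer_time:.1f}")
    print(f"end-to-end FPS:     {num_frames / loop_time:.1f}")
```

If the end-to-end number is much lower than the inference-only number, the bottleneck is likely in capture, pre/post-processing, or display rather than in the network itself.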

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.