detectnet performance

We tested TensorRT 5.0.3 detectnet on the AGX Xavier and got two different FPS values: a new "Network FPS" and the previous "FPS".
Could you explain the difference between these two types of FPS, given that the two values are quite different?

We tested our own detectnet model and got a "Network FPS" of around 30 and a "FPS" of around 15.
Which one should we refer to?
If the relevant number is 15 FPS, what can we do to improve the performance?

We also built the coco-dog model with different pad image sizes and got different Network FPS values:
Pad image 640 x 640 – Network FPS = 60-70 (the coco-dog model from jetson-inference gives Network FPS = 60-90)
Pad image 1100 x 1100 – Network FPS = 20-21

Does the pad image size affect performance this much?
If we still need to use a large pad image, how can we improve the performance?

Thank you for any suggestions.

Hi,

The Network FPS represents the speed of inferencing the model alone.
The FPS is the overall pipeline performance, including camera input, pre-processing, post-processing, and display.
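To make the distinction concrete, here is a minimal timing sketch (not the actual jetson-inference code; the stage durations are made-up placeholders chosen to roughly match the numbers in the question). Timing only the inference call gives the "Network FPS", while timing the whole loop gives the overall "FPS":

```python
import time

def pre_process(frame):
    """Placeholder: pretend resize/normalize takes ~15 ms."""
    time.sleep(0.015)
    return frame

def run_inference(frame):
    """Placeholder for the TensorRT inference call: pretend ~33 ms."""
    time.sleep(0.033)
    return frame

def post_process_and_display(result):
    """Placeholder: pretend overlay + display takes ~18 ms."""
    time.sleep(0.018)

frame = object()  # stand-in for a camera frame
t0 = time.perf_counter()
net_in = pre_process(frame)
t1 = time.perf_counter()
result = run_inference(net_in)
t2 = time.perf_counter()
post_process_and_display(result)
t3 = time.perf_counter()

network_fps = 1.0 / (t2 - t1)  # inference only -> "Network FPS" (~30 here)
overall_fps = 1.0 / (t3 - t0)  # whole pipeline -> "FPS" (~15 here)
print(f"Network FPS ~ {network_fps:.0f}, overall FPS ~ {overall_fps:.0f}")
```

This is why the overall FPS is always lower than the Network FPS: it includes every stage, not just the network.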

Image size also affects the execution time.
The total amount of computation is roughly proportional to the number of input pixels, so a larger pad image directly reduces the frame rate.

For an optimized camera-based pipeline, it's recommended to check our DeepStream library first:
https://developer.nvidia.com/deepstream-sdk

Thanks.

Hi AastaLLL,

Thank you for your information.