Inference time increases in for loop

Hi, I am using this repo (https://github.com/wang-xinyu/tensorrtx/blob/master/) to convert a YOLOv5 network to a .engine model.

I have an issue with inference time. If I follow this script:

I see a slow but steady increase in inference time from one iteration to the next. Is running inference in a plain for loop optimal?
Is there a recommended procedure for predicting a batch of images instead?
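A common cause of apparently increasing per-iteration times is measuring without warm-up iterations, or reading the host clock before the GPU work has actually finished. Below is a minimal sketch of a timing pattern that separates warm-up from measurement; `infer` here is a hypothetical placeholder for your real TensorRT execution call, and with an asynchronous GPU pipeline you would also synchronize the CUDA stream before stopping the timer:

```python
import time

def time_inference(infer, images, warmup=10):
    """Return the mean per-image inference time, excluding warm-up.

    `infer` is a placeholder for the actual TensorRT execution call.
    On a real GPU pipeline, call stream.synchronize() before reading
    the clock, otherwise the timer measures queueing, not the work.
    """
    # Warm-up: the first iterations include one-off costs
    # (CUDA context creation, allocations) and are not representative.
    for img in images[:warmup]:
        infer(img)

    timings = []
    for img in images[warmup:]:
        start = time.perf_counter()
        infer(img)
        # real pipeline: stream.synchronize() here
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)
```

If the measured time still grows steadily after warm-up with synchronization in place, that usually points to a leak (e.g. buffers allocated inside the loop instead of being reused), which is worth checking separately.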

I am a beginner when it comes to optimising models for inference time.

Thanks in advance

Hi @jnaranjo ,
The TRT Forum should be better able to assist here.

Thanks

Perfect! Thanks for the reply!