Faster inference in TensorRT model

I have a TensorRT engine. It runs fine, but the inference time fluctuates greatly from run to run. What factors could cause this?
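Measuring this carefully matters before diagnosing it: without warm-up runs and explicit stream synchronization, the timings mix one-time initialization cost and host-side enqueue time into the numbers. Below is a minimal measurement sketch, assuming the TensorRT 8.x Python API with pycuda; the engine filename, fixed binding shapes, and iteration counts are placeholders for your setup.

```python
# Minimal latency-measurement sketch (assumes TensorRT 8.x Python API + pycuda;
# "model.engine" and the iteration counts are illustrative placeholders).
import time
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.driver as cuda

logger = trt.Logger(trt.Logger.WARNING)
with open("model.engine", "rb") as f:  # placeholder engine path
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate one device buffer per binding (fixed shapes assumed).
bindings = []
for i in range(engine.num_bindings):
    size = trt.volume(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    bindings.append(cuda.mem_alloc(np.empty(size, dtype).nbytes))

stream = cuda.Stream()

# Warm-up: the first iterations include lazy initialization and clock ramp-up.
for _ in range(20):
    context.execute_async_v2([int(b) for b in bindings], stream.handle)
stream.synchronize()

# Timed runs: synchronize before reading the clock, otherwise you measure
# enqueue time on the host, not GPU execution time.
times_ms = []
for _ in range(100):
    start = time.perf_counter()
    context.execute_async_v2([int(b) for b in bindings], stream.handle)
    stream.synchronize()
    times_ms.append((time.perf_counter() - start) * 1e3)

print(f"median {np.median(times_ms):.2f} ms, "
      f"p99 {np.percentile(times_ms, 99):.2f} ms")
```

Reporting a median alongside a high percentile makes the fluctuation itself visible, rather than letting a single mean hide it.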

Hi there @ludi1, and welcome to the NVIDIA developer forums!

I think your question is better asked in the TensorRT forums. If you don’t mind, I will move it over there.

Thanks!