Hello developers, I am having problems deploying Caffe models with TensorRT.
Question 1: The engine file size is inconsistent across builds on the same computer, and the difference between different computers is even bigger. Is this normal?
Question 2: Inference runs normally, but the accuracy is only 71%. I read the image with OpenCV and feed it to the inference engine. I hope someone who knows the solution can help me. Thank you very much.
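A common cause of low accuracy after deployment is a preprocessing mismatch between training and inference: OpenCV loads images as uint8 BGR in HWC layout, while Caffe models typically expect mean-subtracted float32 in CHW layout. A minimal NumPy sketch of that conversion (the mean and scale values here are placeholders, not your model's actual training parameters):

```python
import numpy as np

def preprocess(img_bgr, mean=(104.0, 117.0, 123.0), scale=1.0):
    """Convert an OpenCV-style HWC uint8 BGR image to a CHW float32 tensor.

    mean/scale are placeholder values -- they must match whatever was
    used when the Caffe model was trained.
    """
    x = img_bgr.astype(np.float32)
    x -= np.asarray(mean, dtype=np.float32)   # per-channel mean subtraction (BGR order)
    x *= scale                                # optional input scaling
    return np.transpose(x, (2, 0, 1))         # HWC -> CHW, as Caffe expects

# Example with a dummy 4x4 "image"
dummy = np.full((4, 4, 3), 128, dtype=np.uint8)
out = preprocess(dummy)
print(out.shape)  # (3, 4, 4)
```

Also double-check whether the model expects RGB rather than BGR input; if so, the channel order must be flipped before the transpose.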

Memory usage depends on the device and on the kernels selected to optimize the model, which in turn depend on precision and other factors.
To estimate how much device memory a model will use, please see the FAQ question "How do I determine how much device memory will be required by my network?" linked below.

In this case it is possible that TRT 7 is selecting a more highly optimized kernel to improve the model's performance, at the cost of additional memory.

You can try changing the max workspace size when creating the engine to reduce memory consumption, but setting it too small may degrade performance.
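As a concrete illustration, with the TensorRT 7 Python API the workspace cap is set on the builder config before the engine is built. A sketch under that assumption (the 256 MiB value is only an example, not a recommendation):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(builder, network):
    config = builder.create_builder_config()
    # Cap the scratch memory TensorRT tactics may use at build time.
    # Too small a value can exclude faster kernels and hurt performance.
    config.max_workspace_size = 256 << 20  # 256 MiB, example value
    return builder.build_engine(network, config)
```

Lowering this value only limits the temporary workspace available to layer implementations during engine build; the memory needed for the network's weights and activations is not affected.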