How to make a TensorRT plan file portable across GPU platforms?

When building an engine with TensorRT, can we choose not to perform GPU-platform-specific optimizations, so that the serialized plan file can be used on different GPU platforms? In short, is there any way to make the optimized model portable across GPU platforms? Looking forward to the answer…

Hi @zhaibo99,
The generated plan file is not portable across platforms, TensorRT versions, or GPU models.
Please refer to the link below for details:
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#work
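Because of this, a common workaround is to ship the original model (e.g. ONNX) and build the engine on each target machine, caching the resulting plan per GPU and TensorRT version so a plan built on one platform is never loaded on another. A minimal sketch of such a cache key (the GPU and version strings here are illustrative, not queried from TensorRT):

```python
import hashlib
from pathlib import Path

def plan_cache_path(onnx_path: str, gpu_name: str, trt_version: str,
                    cache_dir: str = "trt_cache") -> Path:
    """Derive a per-(GPU, TensorRT version) plan filename so engines
    built on one platform are never reused on another."""
    key = f"{Path(onnx_path).name}|{gpu_name}|{trt_version}"
    digest = hashlib.sha256(key.encode()).hexdigest()[:16]
    return Path(cache_dir) / f"{digest}.plan"

# Different GPUs (or TRT versions) map to different plan files,
# so the build step runs once per platform and is cached afterwards.
t4 = plan_cache_path("model.onnx", "NVIDIA T4", "8.6.1")
a100 = plan_cache_path("model.onnx", "NVIDIA A100", "8.6.1")
print(t4 != a100)
```

At startup, the application would check whether the cached plan exists for the current GPU and TensorRT version, and rebuild from ONNX only if it does not.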

Thanks!

Thanks. I hope cross-GPU-platform portability will be supported in the future.