I’m looking into TensorRT’s Algorithm Selection feature to achieve reproducible builds. My understanding is that I would generate a file, similar to the INT8 calibration cache, that records which implementation and tactic to use for each layer, so that the same choices can be replayed the next time the engine is built.
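For context, here is a minimal sketch of what I have in mind, based on the `nvinfer1::IAlgorithmSelector` interface: `reportAlgorithms` records the builder's choices, and `selectAlgorithms` replays them on the next build. The class name `ReplaySelector` and the helper `lookupCachedTactic` are my own placeholders, not part of the TensorRT API, and this requires the TensorRT SDK to compile:

```cpp
// Sketch only: needs the TensorRT SDK (NvInfer.h); ReplaySelector and
// lookupCachedTactic are hypothetical names for illustration.
#include <NvInfer.h>
#include <cstdint>

class ReplaySelector : public nvinfer1::IAlgorithmSelector
{
public:
    // Called per layer at build time; restrict `choices` to the
    // implementation/tactic recorded by a previous build, if present.
    int32_t selectAlgorithms(nvinfer1::IAlgorithmContext const& ctx,
                             nvinfer1::IAlgorithm const* const* choices,
                             int32_t nbChoices, int32_t* selection) noexcept override
    {
        for (int32_t i = 0; i < nbChoices; ++i)
        {
            auto const& variant = choices[i]->getAlgorithmVariant();
            // lookupCachedTactic() is a hypothetical helper that consults
            // the cache file, keyed by the layer name from ctx.getName().
            if (lookupCachedTactic(ctx.getName(), variant.getImplementation(),
                                   variant.getTactic()))
            {
                selection[0] = i;
                return 1; // force the builder to use this one choice
            }
        }
        return 0; // no cached entry: let TensorRT choose freely
    }

    // Called after the build with the final per-layer choices; this is
    // where the cache file would be written out.
    void reportAlgorithms(nvinfer1::IAlgorithmContext const* const* ctxs,
                          nvinfer1::IAlgorithm const* const* chosen,
                          int32_t nbAlgorithms) noexcept override
    {
        // ... serialize {layer name, implementation, tactic} entries ...
    }

private:
    bool lookupCachedTactic(char const* /*layerName*/, int64_t /*impl*/,
                            int64_t /*tactic*/) noexcept
    {
        return false; // placeholder for reading the cache file
    }
};

// Attached to the build via:
//   builderConfig->setAlgorithmSelector(&selector);
```

The question below is about whether the `{implementation, tactic}` values recorded this way remain valid on a different GPU.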
From a purely functional point of view (i.e. disregarding performance), can the algorithm selection be reused across GPUs? I am well aware that this might lead to engines that are not optimal (slower), but I am fine with that.
For example, can I:
- Create an Algorithm Selection cache on a GPU with SM 7.5, and reuse it on a different GPU with SM 7.0?
- Create an Algorithm Selection cache on a GPU with SM 7.2, and reuse it on a different GPU with SM 7.5?
To put it differently: do all GPUs with the same major SM version (e.g. 7.x) support the same set of TensorRT implementations/tactics?