Algorithm Selection - reusable across GPUs?

Hi,

I’m looking into the Algorithm Selection feature of TensorRT to have reproducible builds. My understanding is that I would generate a file, similar to the INT8 calibration cache, that specifies which implementations and tactics to use for each layer next time the engine is built.
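To make my mental model concrete, here is a plain-Python sketch of the record/replay mechanism I have in mind. The class and method names are illustrative only; real code would subclass `trt.IAlgorithmSelector` and implement `select_algorithms`/`report_algorithms`:

```python
import json

# Hypothetical sketch of an algorithm-selection cache: record the
# tactic the builder picks per layer, then force the same tactic on
# the next build. Not the actual TensorRT API.

class RecordingSelector:
    """First build: let TensorRT choose freely and record the winners."""
    def __init__(self):
        self.cache = {}  # layer name -> chosen tactic id

    def select_algorithms(self, layer_name, candidate_tactics):
        # Allow all candidates; the builder times them and picks one.
        return list(range(len(candidate_tactics)))

    def report_algorithms(self, layer_name, chosen_tactic):
        self.cache[layer_name] = chosen_tactic

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.cache, f)


class ReplayingSelector:
    """Later builds: force the tactic recorded in the cache."""
    def __init__(self, cache):
        self.cache = cache

    def select_algorithms(self, layer_name, candidate_tactics):
        wanted = self.cache[layer_name]
        # If the recorded tactic is not offered on this GPU, this
        # returns an empty list and the build cannot proceed - which
        # is exactly the cross-GPU question below.
        return [i for i, t in enumerate(candidate_tactics) if t == wanted]


# Toy demonstration
rec = RecordingSelector()
rec.report_algorithms("conv1", 0x1A)
rep = ReplayingSelector(rec.cache)
print(rep.select_algorithms("conv1", [0x07, 0x1A]))  # -> [1]
```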

Question
From a purely functional point of view (i.e., disregarding performance), can the algorithm selection be reused across GPUs? I am well aware that this might lead to engines that are not optimal (slower), but I am fine with that.

For example, can I:
- Create an Algorithm Selection cache on a GPU with SM 7.5 and reuse it on a different GPU with SM 7.0?
- Create an Algorithm Selection cache on a GPU with SM 7.2 and reuse it on a different GPU with SM 7.5?

To put it differently - do all GPUs with the same major SM version (e.g. 7.x) support the same TensorRT implementations/tactics?

Hi @carlosgalvezp,

You can try, but compatibility is not guaranteed. Are you doing this to save build time across different GPUs?

Thank you.

Hi,

No, I’m doing this to obtain reproducible builds, as explained initially. The problem is easier to manage if I can version-control and maintain one single file instead of N files, one per GPU architecture, just as I can keep a single INT8 calibration cache file and reuse it across any GPU.

I have now tested cross-compatibility between CC 7.2 and CC 7.5, and indeed the caches are not compatible.


Hi @carlosgalvezp,

Sorry for the delayed response. I hope the following answers your query.

Also, please note that serialized engines are not portable across platforms or TensorRT versions. Engines are specific to the exact GPU model they were built on (in addition to the platform and the TensorRT version).
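Because of that, a defensive pattern is to store the build environment next to the serialized engine and check it before deserializing. This is a sketch with hypothetical helper names (not a TensorRT API), assuming you obtain the GPU name and TensorRT version through whatever means your setup provides:

```python
import json

# Hypothetical helpers: tag a serialized engine with the GPU model and
# TensorRT version it was built for, so a mismatch fails loudly at load
# time instead of deep inside deserialization.

def save_engine(path, engine_bytes, gpu_name, trt_version):
    """Store the engine plus the build environment it is tied to."""
    with open(path, "wb") as f:
        f.write(engine_bytes)
    with open(path + ".meta", "w") as f:
        json.dump({"gpu": gpu_name, "trt": trt_version}, f)

def load_engine(path, gpu_name, trt_version):
    """Refuse to load an engine built for a different GPU/TRT version."""
    with open(path + ".meta") as f:
        meta = json.load(f)
    if meta != {"gpu": gpu_name, "trt": trt_version}:
        raise RuntimeError(f"engine built for {meta}, not this environment")
    with open(path, "rb") as f:
        return f.read()
```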

Thank you.