Cross-compilation TensorRT serialized engines

Hi!

My question regards generating serialized TRT engines. Is there a way to build them through a cross-compilation process?
I know they're optimized for a specific platform, but building them on the Jetson Nano takes quite a long time. We thought about cross-compiling on a PC. Is that possible?

Regards,
Piotrek

Hi,

Serialized engines are not portable across platforms or TensorRT versions. An engine is specific to the exact GPU model it was built on, in addition to the platform and the TensorRT version.
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-710-ea/tensorrt-developer-guide/index.html#serial_model_c
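Since the engine must be built on the target device anyway, a common workaround for the long build time is to build once and cache the serialized engine on disk, keyed by GPU and TensorRT version so a stale engine is never loaded on the wrong device. Below is a minimal sketch of such a cache-key scheme; the function name and the example GPU/version strings are illustrative, not part of the TensorRT API:

```python
from pathlib import Path

def engine_cache_path(model_name: str, gpu_name: str, trt_version: str,
                      cache_dir: str = "engine_cache") -> Path:
    """Build a cache filename keyed by GPU model and TensorRT version,
    since a serialized engine is only valid for that exact pair."""
    safe_gpu = gpu_name.replace(" ", "_")
    return Path(cache_dir) / f"{model_name}.{safe_gpu}.trt{trt_version}.engine"

# Hypothetical example values for a Jetson Nano with TensorRT 7.1:
path = engine_cache_path("resnet50", "NVIDIA Tegra X1", "7.1.0")
print(path)
```

At startup, deserialize the cached engine if the file for the current GPU and TensorRT version exists; otherwise build it once and serialize it to that path. In a real application you would query the GPU name and TensorRT version at runtime (e.g. via `tensorrt.__version__`) rather than hard-coding them.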

Thanks