Issue regarding the size and type of the model during conversion from ONNX to TRT

Hi Nvidia Support Team,

I am trying to convert our custom model from ONNX to a TensorRT engine on a Jetson Nano. The conversion completes without errors, but the size and type of the TRT engine generated on the Jetson Nano are completely different from those of the engine generated on my local system.

→ Size of the ONNX model = 251 MB
→ Size of the TRT engine generated on the Jetson Nano = 605 MB [Type: STL 3D Model (binary)]
→ Size of the TRT engine generated on my local system = 250 MB [Type: binary]

What is the reason for this difference, and how can I avoid it?

The image attached below shows the file information of the TRT engine generated on the Jetson Nano.

Hi,

TensorRT optimizes a model for the specific GPU architecture and software version it is built on.
In general, it benchmarks all available algorithms, picks a fast one, and serializes the required data into the engine.
As a result, the engine generated from the same model can differ significantly across platforms.

A possible way to improve this is to set a larger workspace size, which allows TensorRT to consider additional algorithms on the Nano.
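For example, the workspace can be increased when building the engine with trtexec. This is a sketch, not a verified command for your setup: the file names are placeholders, and the exact flag depends on your TensorRT version (on TensorRT 7.x, which ships with JetPack for the Nano, `--workspace` takes a value in MiB):

```shell
# Build a TensorRT engine from ONNX with a 2 GiB workspace.
# model.onnx / model.trt are placeholder paths; adjust to your model.
/usr/src/tensorrt/bin/trtexec \
    --onnx=model.onnx \
    --saveEngine=model.trt \
    --workspace=2048
```

Note that the Nano has limited memory, so the workspace you request must still fit alongside the rest of the application.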

Thanks.