How to speed up TensorRT model conversion on Xavier

Hi, Nvidia team,

I’m trying to convert a YOLOX-S model to TensorRT and deploy it on an AGX Xavier. However, since the Xavier has a relatively slow CPU, the conversion with trtexec can take a long time (the full process may take more than 30 minutes). Do you have any tips on how I can speed this up?
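For context, the conversion I’m running looks roughly like this (file names are placeholders, and I’m assuming the `--timingCacheFile` flag is available in the trtexec shipped with recent JetPack releases):

```shell
# Lock the Xavier into its maximum power mode and pin the clocks first;
# the engine builder is partly CPU-bound, so this alone can shorten the
# build noticeably (both tools ship with JetPack).
sudo nvpmodel -m 0
sudo jetson_clocks

# Build the engine. --timingCacheFile persists kernel timing results,
# so rebuilding the same or a similar model later is much faster.
/usr/src/tensorrt/bin/trtexec \
    --onnx=yolox_s.onnx \
    --fp16 \
    --timingCacheFile=yolox_s.cache \
    --saveEngine=yolox_s.engine
```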

Orin faces a similar challenge, although its CPU is faster.

For TensorRT, is there a way to do the heavy lifting of the conversion on an x86 machine with a GPU, or is there no way to avoid the long wait of converting the model on-device?

Dear @chenjie2,
TRT model preparation is a one-time step; we expect the prepared model to be reused for inference afterwards. The TRT model has to be prepared on the target on which you want to perform inference, so TRT models prepared on x86 cannot be loaded on Tegra.
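To make that one-time cost less painful, a common pattern (sketched below with placeholder file names) is to build and serialize the engine once on the Xavier, then load the serialized engine for every subsequent run instead of rebuilding:

```shell
# One-time (slow) step on the Xavier: build the engine from ONNX
# and serialize it to disk.
trtexec --onnx=yolox_s.onnx --fp16 --saveEngine=yolox_s.engine

# Every later run (fast): deserialize the prebuilt engine directly,
# skipping the builder entirely.
trtexec --loadEngine=yolox_s.engine
```

The serialized engine is specific to the GPU architecture and TensorRT version it was built with, which is why it must be produced on the target device itself.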

I see. So basically there is no way to accelerate the TensorRT model conversion on Tegra.

Dear @chenjie2,
there is no way we can accelerate the TensorRT model conversion on Tegra.

Yes
