Export the model with TLT 3.0

The documentation mentions that exporting the model decouples the training process from inference and allows conversion to TensorRT engines outside the TLT environment. Does this mean I can convert a TLT model to a TensorRT engine on a Windows system? If so, how?
Thank you.

End users can convert an .etlt model (not a .tlt model) into a TensorRT engine.
There is no Windows version of tlt-converter.
However, you can run tlt-converter on a Linux machine and then use the resulting TensorRT engine on Windows, assuming the Linux and Windows machines have the same CUDA/cuDNN/TensorRT versions.
Refer to "Can output of tlt-converter be used in Windows TensorRT app?"
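
For reference, a typical tlt-converter invocation on the Linux machine looks like the sketch below. The file names, key variable, input dimensions (-d), and output node names (-o) are placeholders that depend on your network; the output nodes shown are the usual DetectNet_v2 ones, so substitute the values for your own model.

```
# Run on the Linux machine that has CUDA/cuDNN/TensorRT installed.
# $KEY is the encryption key used when the model was exported to .etlt.
./tlt-converter \
    -k $KEY \
    -d 3,544,960 \
    -o output_cov/Sigmoid,output_bbox/BiasAdd \
    -t fp16 \
    -m 16 \
    -e resnet18_detector.engine \
    resnet18_detector.etlt
```

The resulting resnet18_detector.engine file is what you copy over to the Windows machine and load there with the TensorRT runtime.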

If the GPUs on the Linux and Windows machines are not the same, e.g. an RTX 2080 and an RTX 2080 Ti, can I still use the TensorRT engine on Windows (with the same CUDA/cuDNN/TensorRT versions)?
Thank you.

For a 2080 and a 2080 Ti, it should be OK; both GPUs have the same compute capability (7.5).

What about an RTX 20xx and an RTX 30xx? Is it enough just to keep the CUDA/cuDNN/TensorRT versions consistent?
Thank you!

It cannot work: their compute capabilities are not the same (7.5 for the RTX 20xx series vs. 8.6 for the RTX 30xx series), and a TensorRT engine is only valid on GPUs with the compute capability it was built for.
For example, an engine built on a V100 does not run on a GTX 1080. Likewise, if you generate a TensorRT engine for an NVIDIA P4 (compute capability 6.1), you can't use that engine on an NVIDIA Tesla V100 (compute capability 7.0).
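
To check which compute capability your GPUs have before building or copying an engine, you can query the driver directly. Note that the compute_cap query field is only available in reasonably recent drivers (roughly R470 and later, which is an assumption to verify); on older drivers, look the GPU up on developer.nvidia.com/cuda-gpus instead.

```
# Print the name and compute capability of every GPU in the machine.
# Requires a recent NVIDIA driver; older drivers do not recognize
# the compute_cap query field.
nvidia-smi --query-gpu=name,compute_cap --format=csv
```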

OK! And what about an RTX 2080 and a GTX 1650? Their compute capabilities are both 7.5.

I think that should work; please try it. Make sure the Linux and Windows machines have the same CUDA/cuDNN/TensorRT versions.
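
To double-check the version-matching requirement on the Linux side, commands along these lines can help. The paths and package names are assumptions for a default CUDA install with cuDNN 8 and a .deb-based TensorRT install; adjust them to your setup, and compare the results against the versions installed on the Windows machine.

```
# CUDA toolkit version
nvcc --version

# cuDNN version (cuDNN 8 style header; older cuDNN releases put
# these defines in cudnn.h instead of cudnn_version.h)
grep -A2 '#define CUDNN_MAJOR' /usr/local/cuda/include/cudnn_version.h

# TensorRT version (assumes installation from .deb packages)
dpkg -l | grep nvinfer
```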

Fine, thank you!