Is converting a model on one machine and running inference on another risky?

I would like to ask about a setup with two machines that have different GPU drivers, CUDA versions, PyTorch versions, and TensorRT versions. I do the model conversion on the first machine (.pt → ONNX → .engine) and then copy the .engine file to the second machine for inference. Is that feasible?
My actual scenario is that I train and convert the model on one machine and then run inference on edge boxes. The edge boxes are Jetson Nano devices with JetPack 4.5.1 and CUDA 10.2. I'm not sure whether this is OK.
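For context, the PyTorch-to-ONNX step I run on the first machine looks roughly like the sketch below (the model and input shape are just placeholders, not my real network):

```python
# Sketch of the .pt -> ONNX step on the training machine.
# The .onnx file itself is portable across machines; my question is about
# whether the .engine built from it afterwards is portable too.
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()  # placeholder model
dummy_input = torch.randn(1, 3, 224, 224)                    # placeholder input shape

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,   # older opset for compatibility with TensorRT on JetPack 4.5.1
)
```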

Hi,
UFF and the Caffe parser have been deprecated since TensorRT 7, so we recommend using the ONNX parser instead.
Please see the link below for details.

Thanks!
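As a rough illustration of the ONNX route (paths, workspace size, and precision flags below are placeholders, and the API shown is the TensorRT 7/8 style that ships with JetPack 4.x), an engine can be built from the .onnx file like this:

```python
# Sketch: build a TensorRT engine from an ONNX file on the machine that will run it.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, max_workspace_mb=256):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None

    config = builder.create_builder_config()
    config.max_workspace_size = max_workspace_mb * (1 << 20)
    if builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # FP16 is usually worthwhile on Jetson

    return builder.build_engine(network, config)

engine = build_engine("model.onnx")
with open("model.engine", "wb") as f:
    f.write(engine.serialize())
```

The same build can also be done from the command line with something like `trtexec --onnx=model.onnx --saveEngine=model.engine --fp16`.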

Thank you very much for sharing, but you may have misunderstood my needs. I want to use two machines: one for model conversion (PyTorch to ONNX, then ONNX to .engine), and another that receives the .engine file and runs inference. I don't know if this is feasible. Our hardware is limited: the machine used for model conversion would be an RTX 3090 or RTX 3080 Ti, and the inference machine is an NVIDIA Jetson Nano with JetPack 4.5.1 and CUDA 10.2. I'm not sure whether this approach is reasonable. Is it feasible?

Currently, hardware compatibility is supported only for Ampere and later device architectures and is not supported on NVIDIA DRIVE OS or JetPack.

Please refer to the following for more details.
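In practice this means a serialized engine is only valid for the exact TensorRT version and GPU architecture it was built on, so the ONNX-to-engine step should be done on the Jetson Nano itself; an RTX 3090/3080 Ti is compute capability 8.6 while the Jetson Nano is 5.3, so an engine serialized on the desktop will not deserialize on the Nano. A quick way to compare the two machines (assuming PyTorch with CUDA is installed on both; otherwise deviceQuery gives the same information):

```python
# Print the values that must match between the build machine and the inference machine:
# TensorRT version (exactly) and GPU architecture / compute capability.
import tensorrt as trt
import torch

print("TensorRT version:", trt.__version__)
print("GPU:", torch.cuda.get_device_name(0))
print("Compute capability:", torch.cuda.get_device_capability(0))
```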