Question about building a TensorRT inference engine

Hi,
I have a question.

I built an inference engine from an ONNX model on a DGX system and saved the serialized engine file.
When I copied the saved engine to a Jetson (Xavier/Nano) and tried to run inference there,
I got a NoneType error and the script did not run.
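
For reference, here is a minimal sketch of how I load the engine (the file name is a placeholder, and my actual script may fail on a different attribute):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Load the serialized engine that was built on the DGX.
with open("model.engine", "rb") as f:  # placeholder file name
    runtime = trt.Runtime(TRT_LOGGER)
    engine = runtime.deserialize_cuda_engine(f.read())

# deserialize_cuda_engine() returns None when the engine cannot be
# used on this device, so the next line fails with:
#   AttributeError: 'NoneType' object has no attribute 'create_execution_context'
context = engine.create_execution_context()
```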

Do I need to build the inference engine on the actual device where I want to run inference?
Also, if the build environment shouldn't matter, how can I avoid this error?

I'm sorry to ask such a rudimentary question, but I would appreciate your help.

Thank you very much for your help.

Hi,

Since the engine file is device-dependent, please copy the ONNX model to the target device instead,
and create the engine file directly from the ONNX model on that device.
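
As a sketch (assuming a TensorRT 8.x Python API on the Jetson; the paths and workspace size are placeholders you should adjust), building the engine on the target device could look like this:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, engine_path):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    # Parse the ONNX model that was copied to this device.
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("Failed to parse the ONNX model")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB; tune for your device

    # The serialized engine is only valid for this GPU and TensorRT version.
    serialized_engine = builder.build_serialized_network(network, config)
    if serialized_engine is None:
        raise RuntimeError("Engine build failed")
    with open(engine_path, "wb") as f:
        f.write(serialized_engine)

build_engine("model.onnx", "model.engine")
```

Alternatively, the trtexec tool that ships with TensorRT can do the same from the command line, e.g. `trtexec --onnx=model.onnx --saveEngine=model.engine`.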

Thanks.

Hi, @AastaLLL
Thank you very much for your answer.
I understand now that the engine file depends on the device.

I will copy the ONNX file to the device where I actually want to run inference, and then build and use the engine there.

Thank you very much.