Is it possible to save a network definition and load it on a different device?

Description

I am working on deploying a PyTorch detection model as a TensorRT engine on a TX2. It works fine on my desktop, but installing PyTorch and the other dependencies on the TX2 is not easy (and should not be necessary).

I am wondering whether it is possible to send only the network definition to the TX2, and then just build the engine from that definition there?

Thanks


Hi,

You can convert the PyTorch model to ONNX on your desktop and then copy the ONNX model to the TX2 for conversion to a TRT engine there.
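
A minimal sketch of the two steps (the tiny model, file names, and workspace size are placeholders, and the builder calls assume the TensorRT 7.x-era Python API shipped with JetPack):

```python
# On the desktop: export the PyTorch model to ONNX.
import torch
import torch.nn as nn

# Placeholder network; substitute your detection model here.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()).eval()
dummy = torch.randn(1, 3, 300, 300)
torch.onnx.export(model, dummy, "model.onnx", opset_version=11,
                  input_names=["input"], output_names=["output"])
```

```python
# On the TX2: parse the ONNX file and build a serialized engine.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse model.onnx")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28  # 256 MiB
engine = builder.build_engine(network, config)
with open("model.engine", "wb") as f:
    f.write(engine.serialize())
```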

Thanks

Thanks for the reply.
My model contains multiple custom layers (IPluginV2DynamicExt). I don’t know how to convert them to ONNX. Is it possible to add a custom layer to ONNX that can be used in TensorRT?
By the way, I use torch2trt (GitHub - NVIDIA-AI-IOT/torch2trt: An easy to use PyTorch to TensorRT converter) to create the engine.

Hi,

Since you already have the TensorRT plugin implementations, the best approach will be to use torch2trt to create the engine.
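
For example, a minimal torch2trt sketch (the toy model stands in for your network, and fp16_mode is optional; the engine is exposed via the wrapped module's engine attribute):

```python
import torch
import torch.nn as nn
from torch2trt import torch2trt

# Placeholder model; substitute your detection network with its plugins.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()).cuda().eval()
x = torch.randn(1, 3, 300, 300).cuda()

# Builds a TensorRT engine by running the module with the sample input.
model_trt = torch2trt(model, [x], fp16_mode=True)

# Serialize the built engine for later deserialization on the same device.
with open("model.engine", "wb") as f:
    f.write(model_trt.engine.serialize())
```

Note that a serialized engine is specific to the GPU it was built on, so this conversion has to run on the TX2 itself.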

An alternative approach is to create an ONNX model using torch.onnx and, based on the ops supported by the ONNX parser, add a custom plugin if required to generate the TRT engine:
https://pytorch.org/docs/stable/onnx.html
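
If you go the ONNX route, a custom layer can be exported through a symbolic function on a torch.autograd.Function. A minimal sketch (the op name, domain, and doubling math are all illustrative; mapping the resulting node onto your IPluginV2DynamicExt depends on the plugin being registered under a matching name):

```python
import torch
import torch.nn as nn

class MyPluginOp(torch.autograd.Function):
    """Stand-in for a layer that is backed by a TensorRT plugin."""

    @staticmethod
    def forward(ctx, x):
        # Reference computation used when the model runs in PyTorch;
        # the real implementation lives in the TensorRT plugin.
        return x * 2.0

    @staticmethod
    def symbolic(g, x):
        # Node written into the ONNX graph. "mydomain::MyPlugin" is an
        # illustrative name that must match what your plugin registers.
        return g.op("mydomain::MyPlugin", x)

class Wrapper(nn.Module):
    def forward(self, x):
        return MyPluginOp.apply(x)

torch.onnx.export(Wrapper(), torch.randn(1, 3, 8, 8), "custom_op.onnx",
                  opset_version=11)
```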

Thanks

OK … torch2trt seems better for me.
Thanks for the answer. Have a nice day.