Can I use a model saved by torch2trt in a C++ environment?

Hi,
I used torch2trt to save a model, following the example in the repository's README:
torch.save(model_trt.state_dict(), 'alexnet_trt.pth')
Can I use this model in a C++ environment with TensorRT?

Thanks,
Avi

Hi,

The serialized TensorRT engine can be reloaded with TensorRT C++ API directly.
https://docs.nvidia.com/deeplearning/tensorrt/api/c_api/classnvinfer1_1_1_i_runtime.html#abbcb22a6b2a2a77174e43e6f6bcc2fd1

nvinfer1::ICudaEngine* engine = infer->deserializeCudaEngine(engine_stream, engine_size, pluginFactory);

However, please note that a TensorRT engine file is not portable:
it cannot be used across different platforms or TensorRT versions.
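One caveat worth spelling out: the .pth file written by torch.save() is a Python pickle that wraps the engine bytes, so the C++ runtime cannot read it directly. You would first dump the raw serialized engine to a plain binary file from Python (torch2trt exposes the built engine, so something like writing model_trt.engine.serialize() to disk), then load that file in C++. Below is a minimal sketch of the C++ side using the two-argument deserializeCudaEngine overload; the filename "alexnet.engine" and the error handling are assumptions for illustration, not part of the original answer.

```cpp
#include <fstream>
#include <iostream>
#include <vector>
#include <NvInfer.h>

// Minimal logger implementation; the TensorRT runtime requires one.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
};

int main() {
    // "alexnet.engine" is an assumed filename: a file containing the raw
    // serialized engine bytes, written out from Python beforehand.
    std::ifstream file("alexnet.engine", std::ios::binary | std::ios::ate);
    if (!file) {
        std::cerr << "failed to open engine file" << std::endl;
        return 1;
    }
    const std::streamsize size = file.tellg();
    file.seekg(0);
    std::vector<char> engine_data(static_cast<size_t>(size));
    file.read(engine_data.data(), size);

    Logger logger;
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(engine_data.data(), engine_data.size());
    if (!engine) {
        std::cerr << "engine deserialization failed" << std::endl;
        return 1;
    }
    // ... create an execution context from `engine` and run inference ...
    return 0;
}
```

The pluginFactory argument shown in the snippet above belongs to an older overload; newer TensorRT releases take only the blob pointer and size.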

Thanks.