Currently I have a C++ TensorRT pipeline that converts a PyTorch model into a TensorRT engine. The serialized engine is saved with a `.trt` extension and can be used to run inference in a C++ environment.
Now I am wondering whether a `.trt` engine serialized in C++ can be loaded by the Python TensorRT API, and if so, how?
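For reference, here is a minimal sketch of what I imagine the Python side would look like, assuming the serialized engine can be deserialized with `trt.Runtime` (the file path `model.trt` is just a placeholder, and I assume the Python `tensorrt` package must match the TensorRT version used by the C++ build):

```python
def load_engine(path):
    """Deserialize a TensorRT engine file that was built and serialized in C++."""
    # Imported inside the function so the sketch is self-contained;
    # requires the Python tensorrt package matching the C++ TensorRT version.
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    with open(path, "rb") as f, trt.Runtime(logger) as runtime:
        return runtime.deserialize_cuda_engine(f.read())


# Hypothetical usage:
# engine = load_engine("model.trt")
# context = engine.create_execution_context()
```

Is this roughly the intended workflow, or is something extra needed because the engine was built in C++?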
Thanks in advance,
TensorRT Version: 7.0.0
GPU Type: 2070 Super
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version: 7.6
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): /
PyTorch Version (if applicable): 1.6.0
Baremetal or Container (if container which image + tag): /