TensorRT runtime engine compatibility between Python API and C++ API

Description

Can I create a serialized TRT runtime engine using the Python API and then use this same engine (i.e., deserialize it and infer) to perform inference using the C++ API?

Environment

TensorRT Version: 7.1
GPU Type: Xavier
Nvidia Driver Version: 460
CUDA Version:
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Steps To Reproduce

Hi,
Please check the below link, as it might answer your concerns.

Thanks!

OK, thanks for sharing this documentation. However, I didn't really find an answer to my question in it.

My concern is: I have already generated an .engine file using the Python API. Is it possible to deserialize this .engine file using the C++ API and use the deserialized engine to perform inference with the C++ API? Or, to use an .engine file with the C++ API, do I also need to generate the .engine file with the C++ API?

Best regards,

@thomas.boulay,

You can use an engine file across Python and C++, so the operation you're looking for is supported.
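The serialized engine format does not depend on the API used to create it; note, though, that a serialized engine is tied to the exact TensorRT version and GPU it was built on, so deserialize it with the same TensorRT 7.1 build on the same Xavier device. A minimal C++ sketch of loading such a file might look like the following ("model.engine" is a placeholder for the file your Python script wrote out):

```cpp
#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <vector>

// Minimal logger implementation required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
};

int main()
{
    // "model.engine" is a placeholder: substitute the file produced
    // on the Python side with engine.serialize().
    std::ifstream file("model.engine", std::ios::binary | std::ios::ate);
    if (!file)
    {
        std::cerr << "Failed to open engine file" << std::endl;
        return 1;
    }
    const size_t size = file.tellg();
    file.seekg(0);
    std::vector<char> engineData(size);
    file.read(engineData.data(), size);

    Logger logger;
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    // The IPluginFactory argument is deprecated in TRT 7; pass nullptr.
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(engineData.data(), size, nullptr);
    if (!engine)
    {
        std::cerr << "Engine deserialization failed" << std::endl;
        return 1;
    }

    nvinfer1::IExecutionContext* context = engine->createExecutionContext();
    // ...allocate device buffers for each binding, copy inputs to the GPU,
    // then call context->enqueueV2(bindings, stream, nullptr) to infer...

    context->destroy();
    engine->destroy();
    runtime->destroy();
    return 0;
}
```

Link against nvinfer (and cudart for the buffer management elided above). The same file can also be loaded back in Python via tensorrt.Runtime(...).deserialize_cuda_engine(...).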

Thank you.