Problem loading TRT engine plan on another machine.

Hello.

I am testing loading a TRT engine plan.
When trying to load an engine plan built on a 1080 Ti onto a 2080 Ti, the following error occurred.

[2018-12-17 06:41:24 ERROR] The engine plan file is generated on an incompatible device, expecting compute 7.5 got compute 6.1, please rebuild.

In addition, when loading a TRT engine plan built with TensorRT 5.0.2.6 on Linux using TensorRT 5.0.4.3 on Windows, the following error occurred.

[2018-12-17 06:48:13 ERROR] The engine plan file is incompatible with this version of TensorRT, expecting 5.0.4.3 got 5.0.2.6, please rebuild.

As a result, I found that the TRT engine plan depends on the GPU’s compute capability and the TensorRT version (OS).
If the TRT engine plan depends only on compute capability and TensorRT version, can I load an engine plan built with the same TensorRT version (and OS) on a different machine, as long as the GPU has the same compute capability (for example, a 1060, 1070, or 1080, which are all compute 6.1)?

Besides compute capability and TensorRT version, are there other factors the TRT engine plan depends on (for example, CPU, system memory, CUDA, cuDNN, etc.)?

Thanks.

j-kim,

The TRT engine plan does depend on the compute capability and the TRT version. The CUDA and cuDNN versions shouldn't be a factor. To change CPU architecture, you'd need a platform that can run the same TRT version you built with.
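
If it helps, both mismatches can be caught up front before attempting to deserialize. Below is a minimal C++ sketch, not an official recipe: the expected values are placeholders you would record yourself at build time, while cudaGetDeviceProperties() and getInferLibVersion() are the documented calls.

// Sketch only: record the build-time compute capability and TRT version
// alongside the plan, then verify them at load time before calling
// deserializeCudaEngine(). The kExpected* values are placeholders.
#include <NvInfer.h>            // getInferLibVersion()
#include <cuda_runtime_api.h>   // cudaGetDeviceProperties()
#include <cstdio>

int main()
{
    const int kExpectedMajor = 6;    // e.g. plan built on a 1080 Ti (compute 6.1)
    const int kExpectedMinor = 1;
    const int kExpectedTrt   = 5002; // e.g. TRT 5.0.2: major*1000 + minor*100 + patch

    cudaDeviceProp prop{};
    cudaGetDeviceProperties(&prop, 0);
    if (prop.major != kExpectedMajor || prop.minor != kExpectedMinor)
    {
        std::printf("Compute mismatch: plan built for %d.%d, device is %d.%d; rebuild.\n",
                    kExpectedMajor, kExpectedMinor, prop.major, prop.minor);
        return 1;
    }

    // getInferLibVersion() reports the version of the libnvinfer linked
    // at runtime, encoded as major*1000 + minor*100 + patch.
    if (getInferLibVersion() != kExpectedTrt)
    {
        std::printf("TRT version mismatch: linked %d, expected %d; rebuild.\n",
                    getInferLibVersion(), kExpectedTrt);
        return 1;
    }

    std::printf("Device and TRT version match; safe to deserialize.\n");
    return 0;
}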

KevinSchlichter,

Thank you for reply!

Thanks!

Hi there @NVES_K, I'm having a similar problem. I installed CUDA 9 and used ONNX and TRT4 to convert some models successfully, then I migrated to a newer CUDA to test some TRT5 features, reverted back to CUDA 9, and now I'm having trouble loading serialized networks created from ONNX.

To convert from ONNX to TRT I'm using onnx-tensorrt (https://github.com/onnx/onnx-tensorrt).

I also tried using an nvidia-docker image from NGC (https://ngc.nvidia.com) [nvcr.io/nvidia/tensorrt:18.08-py2] to rule out installation issues, and the problem persists. I'm able to serialize an engine.trt, but when loading the serialized engine into TRT4 I'm getting:

ERROR: The engine plan file is not compatible with this version of GIE, please rebuild.

But the TRT versions seem to be the same. For the nvidia-docker image, the output of

dpkg -l | grep -i tensorrt

is:

ii  libnvinfer-dev              4.1.2-1+cuda9.0                       amd64        TensorRT development libraries and headers                                        
ii  libnvinfer-samples          4.1.2-1+cuda9.0                       amd64        TensorRT samples and documentation                                                
ii  libnvinfer4                 4.1.2-1+cuda9.0                       amd64        TensorRT runtime libraries                                                        
ii  python-libnvinfer           4.1.2-1+cuda9.0                       amd64        Python bindings for TensorRT
ii  python-libnvinfer-dev       4.1.2-1+cuda9.0                       amd64        Python development package for TensorRT
ii  python-libnvinfer-doc       4.1.2-1+cuda9.0                       amd64        Documention and samples of python bindings for TensorRT
ii  tensorrt                    4.0.1.6-1+cuda9.0                     amd64        Meta package of TensorRT

And on the host computer I get the same versions:

ii  libnvinfer-dev                                             4.1.2-1+cuda9.0                                       amd64        TensorRT development libraries and headers
ii  libnvinfer-samples                                         4.1.2-1+cuda9.0                                       amd64        TensorRT samples and documentation
ii  libnvinfer4                                                4.1.2-1+cuda9.0                                       amd64        TensorRT runtime libraries
ii  nv-tensorrt-repo-ubuntu1604-cuda9.0-ga-trt4.0.1.6-20180612 1-1                                                   amd64        nv-tensorrt repository configuration files
ii  python-libnvinfer                                          4.1.2-1+cuda9.0                                       amd64        Python bindings for TensorRT
ii  python-libnvinfer-dev                                      4.1.2-1+cuda9.0                                       amd64        Python development package for TensorRT
ii  tensorrt                                                   4.0.1.6-1+cuda9.0                                     amd64        Meta package of TensorRT

On the host I'm having issues compiling onnx-tensorrt, which is why I used the docker image, but I'm really stuck: I serialize in the docker image and then can't load on the host computer (even though the versions are pretty much the same).
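
For reference, this is roughly how I deserialize on the host (a minimal sketch, simplified from my actual code; the logger and file handling are reduced, and the plugin factory argument is null since the network has no custom plugins):

// Minimal host-side loading sketch (simplified).
#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <vector>

class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        std::cerr << msg << std::endl;
    }
} gLogger;

int main()
{
    // Read the plan produced inside the docker image.
    std::ifstream file("engine.trt", std::ios::binary | std::ios::ate);
    const std::streamsize size = file.tellg();
    file.seekg(0, std::ios::beg);
    std::vector<char> plan(size);
    file.read(plan.data(), size);

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);

    // This deserialize call is where the "not compatible with this
    // version of GIE" error gets logged.
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(plan.data(), plan.size(), nullptr);
    if (!engine)
    {
        std::cerr << "Failed to deserialize engine.trt" << std::endl;
        return 1;
    }

    // ... create an execution context and run inference ...

    engine->destroy();
    runtime->destroy();
    return 0;
}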

Any help is appreciated!!