Hi, I had a problem deserializing a TensorRT engine that was built on an RTX 3070 and run on a 2080 SUPER.
The error was as follows:
[TRT]: INVALID_CONFIG: The engine plan file is generated on an incompatible device, expecting compute 8.6 got compute 7.5, please rebuild.
Except for the GPU, everything, including the TensorRT and CUDA versions, was the same, since both machines used the same Docker container.
TensorRT Version: 220.127.116.11+cuda11.1
GPU Type: Serialized: RTX 3070 / Deserialized: RTX 2080 SUPER
Nvidia Driver Version: 465.19.01
CUDA Version: 11.1
CUDNN Version: 8.0.5
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): python 3.6.9
TensorFlow Version (if applicable): N/A
PyTorch Version (if applicable): 1.8.0+cu111
Baremetal or Container (if container which image + tag):
My question is: is it possible to serialize the engine file on, for example, a compute capability 8.6 device and deserialize it on a compute capability 7.5 device, i.e. one lower than the build environment?
My guess is that if I build the engine on the higher compute capability device while explicitly specifying the target compute architecture and SM code at build time, the engine could then be deserialized on the lower compute capability device.
For example, cmake … -DGPU_ARCHS="75"
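As far as I know, a TensorRT engine plan is tied to the compute capability (SM version) of the GPU it was built on, which is exactly what the INVALID_CONFIG error is reporting; the usual workaround is to build (or cache) one engine per compute capability and pick the matching one at load time. A minimal sketch of that naming scheme (the file-name convention and the way you obtain the capability, e.g. `torch.cuda.get_device_capability()` with the PyTorch install above, are my own assumptions, not TensorRT API):

```python
# Sketch: keep one serialized engine per compute capability, so each
# GPU deserializes only a plan built for its own SM version.
# The (major, minor) tuple would come from e.g.
# torch.cuda.get_device_capability() on the target machine.

def engine_path_for(capability: tuple, base: str = "model") -> str:
    """Map a (major, minor) compute capability to an engine file name."""
    major, minor = capability
    # e.g. (8, 6) -> model.sm86.engine on the RTX 3070,
    #      (7, 5) -> model.sm75.engine on the 2080 SUPER
    return f"{base}.sm{major}{minor}.engine"
```

At startup you would check whether the file for the current GPU exists and, if not, rebuild the engine on that machine instead of copying the plan file across devices.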
Thanks in advance,