The engine plan file is generated on an incompatible device, expecting compute 6.1 got compute 8.6, please rebuild

Error: the engine plan file is generated on an incompatible device, expecting compute 6.1 got compute 8.6, please rebuild.
engine.cpp::nvinfer1::rt::deserializeEngine::934] Error code 2: Internal Error (Assertion engine->deserialize(start, size, allocator, runtime) failed).

1. In a Windows 10 environment I trained a model on an RTX 3090, which has compute capability 8.6, and used TensorRT to convert the resulting .pt model into an engine for accelerated inference.
But when I deployed it on a GTX 1060, which has compute capability 6.1, running the exe gives the same error posted above.
I have kept the CUDA version the same throughout: both the training and deployment machines are on CUDA 12.
2. How should I deal with this error? Can I make the engine compatible with the GTX 1060 by setting the compute capability when generating it with TensorRT, or do I have to regenerate it on the GTX 1060? (A minimal rebuild sketch follows below.)
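For reference, a minimal rebuild sketch with the TensorRT Python API, assuming the .pt model has already been exported to ONNX (model.onnx and model.engine are placeholder names). The key point is that running this build on the deployment machine ties the plan to that GPU's compute capability:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Explicit-batch network, as required for ONNX models.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:  # placeholder: ONNX export of the .pt model
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB workspace

# Building here, on the deployment GPU, makes the plan match its compute capability.
serialized_engine = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(serialized_engine)
```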

Hi,

Generated TensorRT engine files are not portable across platforms or TensorRT versions. A plan is specific to the exact GPU model it was built on (in addition to the platform and the TensorRT version), so to run it on a different GPU you must rebuild it on that GPU.

TensorRT 8.6 and later versions support hardware and version compatibility.
Please refer to the following document, which may help you resolve the above issue:
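As a sketch, these are the relevant builder-config options in the TensorRT Python API (assuming TensorRT >= 8.6). Note that HardwareCompatibilityLevel.AMPERE_PLUS covers Ampere (compute 8.x) and newer GPUs only, so it will not make an engine built on an RTX 3090 load on a compute 6.1 card:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()

# Let the engine run on any Ampere-or-newer GPU, not only the exact build GPU.
config.hardware_compatibility_level = trt.HardwareCompatibilityLevel.AMPERE_PLUS

# Let a newer TensorRT runtime deserialize the engine.
config.set_flag(trt.BuilderFlag.VERSION_COMPATIBLE)
```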

Thank you.


I actually tested this: accelerated inference still works when the engine is moved to an RTX 3060, as long as the CUDA version is the same, because the RTX 3060 and the RTX 3090 have the same compute capability, 8.6.
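A quick way to confirm this before copying an engine between machines (a sketch, assuming PyTorch is installed on both):

```python
import torch

# Print the name and compute capability of GPU 0.
major, minor = torch.cuda.get_device_capability(0)
print(torch.cuda.get_device_name(0), f"-> compute capability {major}.{minor}")
# An RTX 3090 or RTX 3060 reports 8.6; a GTX 1060 reports 6.1.
```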
I am leaving this record here in the hope that it helps others.
Thank you.