The engine plan file is generated on an incompatible device, expecting compute 6.1 got compute 8.6, please rebuild

Error: the engine plan file is generated on an incompatible device, expecting compute 6.1 got compute 8.6, please rebuild.
engine.cpp::nvinfer1::rt::deserializeEngine::934] Error code 2: Internal Error (Assertion engine->deserialize(start, size, allocator, runtime) failed).

1. In a Windows 10 environment I trained a model on an RTX 3090, which has compute capability 8.6, and used TensorRT to convert the resulting .pt model into an engine file for accelerated inference.
But I deployed it on a GTX 1060, which has compute capability 6.1, and running the exe gives me the error posted above.
I kept the CUDA version the same throughout; both the training and deployment machines are on CUDA 12.
2. How should I deal with this error? Can I make the engine compatible with the GTX 1060 by setting the compute capability when generating the engine with TensorRT, or do I have to generate it again on the GTX 1060?

Hi,

Generated TensorRT engine files are not portable across platforms or TensorRT versions. Plans are specific to the exact GPU model they were built on (in addition to the platform and the TensorRT version), so to run a plan on a different GPU you must rebuild it on that GPU.

TensorRT 8.6 and later versions support hardware and version compatibility.
Please refer to the following document, which may help you resolve the above issue:
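As a concrete illustration, here is a minimal sketch of how those two settings can be enabled through the TensorRT Python API when building from an ONNX export. The file names model.onnx and model.plan are placeholders, and this assumes TensorRT >= 8.6; note that the hardware compatibility level only covers Ampere and newer GPUs, so it would not help with the GTX 1060 above:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:  # placeholder path
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()

# Hardware compatibility: the plan becomes loadable on any Ampere-or-newer
# GPU (e.g. both an RTX 3090 and an RTX 3060). It does NOT extend to
# pre-Ampere cards such as a GTX 1060 (compute 6.1) or a V100 (compute 7.0).
config.hardware_compatibility_level = trt.HardwareCompatibilityLevel.AMPERE_PLUS

# Version compatibility: the plan can also be deserialized by newer TensorRT
# runtimes (loading it then requires runtime.engine_host_code_allowed = True).
config.set_flag(trt.BuilderFlag.VERSION_COMPATIBLE)

serialized_engine = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(serialized_engine)
```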

Thank you.

I actually tested this: accelerated inference still works across cards with the same compute capability. The engine ran on an RTX 3060 as long as the CUDA version was the same, because the 3060 and the 3090 both have compute capability 8.6.
I am putting this on record here in the hope it helps others.
Thank you.
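
For anyone who wants to check this match before copying a plan between machines, PyTorch (already part of this stack) can report the compute capability; a quick sketch, assuming a CUDA-enabled PyTorch install:

```python
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"{torch.cuda.get_device_name(0)}: compute capability {major}.{minor}")
# An RTX 3090 and an RTX 3060 both report 8.6; a GTX 1060 reports 6.1.
```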

The same error persists in TensorRT version 10.0.0b6 - I get the error: “[TRT] [E] 6: The engine plan file is generated on an incompatible device, expecting compute 7.0 got compute 8.6, please rebuild.” I’m using the same Ubuntu, CUDA, Python, onnx, torch, ultralytics (YOLO), and TensorRT versions on both machines.

From the documentation:

By default, TensorRT engines are only compatible with the type of device where they were built. With build-time configuration, engines can be built that are compatible with other types of devices. Currently, hardware compatibility is supported only for Ampere and later device architectures and is not supported on NVIDIA DRIVE OS or JetPack.

Note that even if you build for Ampere, you can expect lower performance:

When building in hardware compatibility mode, TensorRT excludes tactics that are not hardware compatible, such as those that use architecture-specific instructions or require more shared memory than is available on some devices. Thus, a hardware-compatible engine may have lower throughput and/or higher latency than its non-hardware-compatible counterpart. The degree of this performance impact depends on the network architecture and input sizes.

This means that, for now, for NVIDIA Tesla V-series GPUs (Volta, compute capability 7.0, which predates Ampere) the only solution is to rebuild the engine on the same kind of machine you plan to run it on.
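
Once the engine has been rebuilt on the target machine, a quick sanity check is to deserialize it there with the TensorRT Python API; a minimal sketch, with model.plan as a placeholder path:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

with open("model.plan", "rb") as f:  # placeholder path
    engine = runtime.deserialize_cuda_engine(f.read())

if engine is None:
    raise RuntimeError("Deserialization failed - rebuild the plan on this GPU")
print("Engine deserialized successfully")
```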
