I have converted an ONNX model into a TensorRT engine (.trt) file, but testing the .trt file keeps reporting errors:
[07/02/2022-09:02:28] [TRT] [I] [MemUsageChange] Init CUDA: CPU +363, GPU +0, now: CPU 426, GPU 5533 (MiB)
[07/02/2022-09:02:30] [TRT] [I] Loaded engine size: 1148 MiB
[07/02/2022-09:02:32] [TRT] [E] 1: [pluginV2Runner.cpp::load::290] Error Code 1: Serialization (Serialization assertion creator failed.Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
[07/02/2022-09:02:32] [TRT] [E] 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
----------type:<class 'NoneType'>
Traceback (most recent call last):
File "text_engine.py", line 78, in <module>
for idx in range(engine.num_bindings):
AttributeError: 'NoneType' object has no attribute 'num_bindings'
The section of code that produces the error:
logger = trt.Logger(trt.Logger.INFO)
with open("/home/upre/drz/onnx_to_engine_test/solov2.trt", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
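The "IPluginCreator not found in Plugin Registry" error means the engine was built with a plugin that is not registered in the runtime process that deserializes it. A minimal sketch of one common fix, assuming the custom plugins were compiled into a shared library during the ONNX-to-TRT conversion (the .so path below is hypothetical): load that library and register TensorRT's built-in plugins before calling deserialize_cuda_engine:

import ctypes
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)

# Load the shared library that registers the custom plugin creators
# (hypothetical path; point this at the library built for your plugins).
ctypes.CDLL("/path/to/libcustom_plugins.so")

# Register TensorRT's built-in plugins with the global plugin registry.
trt.init_libnvinfer_plugins(logger, "")

with open("/home/upre/drz/onnx_to_engine_test/solov2.trt", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())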
Environment
TensorRT Version: 8.2.1.8
GPU Type:
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: Jetson NX + JetPack 4.6.1
Python Version (if applicable): 3.6
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.8
Baremetal or Container (if container which image + tag):
Hi,
Please refer to the links below for custom plugin implementation and samples:
While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins or refactor existing ones to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.
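To confirm which plugin creators are actually visible to the deserializer (and under what name, version, and namespace the engine expects them), you can list the registry contents; a minimal sketch:

import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
# Register the built-in plugins shipped with TensorRT.
trt.init_libnvinfer_plugins(logger, "")

# Print every creator currently registered in this process.
for creator in trt.get_plugin_registry().plugin_creator_list:
    print(creator.name, creator.plugin_version, creator.plugin_namespace)

If the plugin your engine was built with does not appear in this list, the shared library that registers it has not been loaded into the process.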
Please refer to the following doc for the operators supported by TensorRT.
If your model uses any unsupported operation, you may need to implement a custom plugin for it.
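To see which operators your model actually uses, so you can compare them against the supported-operators list, a minimal sketch, assuming the original ONNX file is available (the filename below is an assumption):

import onnx

model = onnx.load("solov2.onnx")  # assumed filename of the exported model
# Collect the distinct op types used in the graph.
op_types = sorted({node.op_type for node in model.graph.node})
print("\n".join(op_types))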