Runtime.deserialize_cuda_engine returns a NoneType, how to fix it?


runtime.deserialize_cuda_engine(serialized_engine) returns a NoneType.
I have searched the forum, but there is no suitable solution.


TensorRT Version:
GPU Type: GTX1660
Nvidia Driver Version: 465.89
CUDA Version: 11.3
CUDNN Version: 8.4.0
Operating System + Version: win10
Python Version (if applicable): 3.8
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.11.0+cu113
Baremetal or Container (if container which image + tag):

Code below:

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_file_path):
    explicit_batch_flag = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(logger) as builder, \
         builder.create_network(explicit_batch_flag) as network, \
         builder.create_builder_config() as config:
        parser = trt.OnnxParser(network, logger)
        print('Beginning ONNX file parsing')
        success = parser.parse_from_file(onnx_file_path)
        for idx in range(parser.num_errors):
            print(parser.get_error(idx))
        if not success:
            return None
        print("Complete parsing of ONNX file")

        builder.max_batch_size = 1  # deprecated; ignored for explicit-batch networks
        if builder.platform_has_fast_fp16:
            config.set_flag(trt.BuilderFlag.FP16)

        print('Building an engine...')
        # last_layer = network.get_layer(network.num_layers - 1)
        engine = builder.build_serialized_network(network, config)  # returns serialized bytes
        print("Completed creating engine")
    return engine

engine = build_engine("7_class_cuda.onnx")

with open("sample.engine", "wb") as f:
    f.write(engine)

with open("sample.engine", "rb") as f:
    serialized_engine = f.read()

runtime = trt.Runtime(logger)
engine_ = runtime.deserialize_cuda_engine(serialized_engine)

Please share the ONNX model and the script, if not shared already, so that we can assist you better.
In the meantime, you can try a few things:

  1. Validate your model with the below snippet:

import onnx
filename = yourONNXmodel
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.

In case you are still facing the issue, please share the trtexec "--verbose" log for further debugging.
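For reference, a typical trtexec invocation for this workflow might look like the following. This is a sketch: it assumes trtexec is on your PATH, and the ONNX/engine file names are placeholders taken from the script above; adjust them to your setup.

```shell
# Parse the ONNX model, build a TensorRT engine, and capture a detailed log.
# --fp16 is optional; drop it to build in FP32.
trtexec --onnx=7_class_cuda.onnx \
        --saveEngine=sample.engine \
        --fp16 \
        --verbose > trtexec_verbose.log 2>&1
```

If the engine builds successfully here but your Python script still gets None from deserialize_cuda_engine, the problem is likely in the Python-side serialization/deserialization rather than the model itself.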

It seems my ONNX model is too large to upload (it is 119 MB). Do you have any suggestion for how to share it?

But it works fine if I don't load the file from "sample.engine" (you can find this part in my code) and instead directly use the engine returned by my build_engine function.


Which version of TensorRT are you using? Please use the latest TensorRT version.
Also, please refer to the following sample and make sure your script is correct.

Thank you.

Actually, my problem can be simplified: I can't read the engine back from a binary data file.


Have you tried the sample code provided?

Sorry for the late reply. Yes, I have tried that code, but unfortunately when I retried it, it didn't work. Same code and same process, yet a different result. @spolisetty

Could you please share the latest script along with the model and error logs for better debugging?


Based on the error, it seems that your plugin library is not being registered for some reason.
Could you please check whether your plugins are registered properly, or force plugin initialization using trt.init_libnvinfer_plugins(None, '')?
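As a minimal sketch, the plugin initialization should happen before the runtime deserializes the engine. This assumes a valid serialized engine already exists at "sample.engine" (the file name from the script above):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)

# Register all built-in TensorRT plugins before creating the runtime.
# Engines containing plugin layers can fail to deserialize (returning None)
# if their plugin creators are not registered first.
trt.init_libnvinfer_plugins(None, '')

with open("sample.engine", "rb") as f:
    serialized_engine = f.read()

runtime = trt.Runtime(logger)
engine = runtime.deserialize_cuda_engine(serialized_engine)
assert engine is not None, "Engine deserialization failed"
```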

How did you install TensorRT?

Also, the following similar issue may help you:

Sorry for the late reply. I did try using trt.init_libnvinfer_plugins(None, ''), but the program still breaks down sometimes. @spolisetty


How did you install TensorRT? This issue may be due to an incorrect setup as well.
Could you please try reinstalling by following the correct installation steps?

Thank you.