[TensorRT] ERROR: FAILED_ALLOCATION: basic_string::_S_construct null not valid

Linux - 16.04
GPU type - RTX 2080Ti
Nvidia driver version - 435.21
CUDA version - 10.0
CUDNN version - 7.6.5
Python version - 3.6

TensorRT version - 7.0.0.11

Hi,

I am trying to save the built TensorRT engine using the engine.serialize() API, but I am getting the following error:

“[TensorRT] ERROR: FAILED_ALLOCATION: basic_string::_S_construct null not valid”

When this error occurs, engine.serialize() returns None.
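
For reference, a minimal guard around the serialization step (sketch only; it reuses the engine and TRT_LOGGER from the code further down) makes the failure explicit instead of writing a null buffer, and building with a verbose logger makes TensorRT print more detail about where the allocation fails:

# Sketch only: `engine` is the engine built in the code below.
# Building with a verbose logger gives more detail about the failure:
#   TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
serialized = engine.serialize()  # IHostMemory buffer on success, None on failure
if serialized is None:
    raise RuntimeError("engine.serialize() failed; check the TensorRT log for details")
with open("./serialized_model/sample.engine", "wb") as f:
    f.write(serialized)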

The layers in the TensorRT network parsed from the ONNX model are:
(Unnamed Layer* 0) [Convolution]
(Unnamed Layer* 1) [PluginV2DynamicExt]
(Unnamed Layer* 2) [Activation]
(Unnamed Layer* 3) [Convolution]
(Unnamed Layer* 4) [Convolution]
(Unnamed Layer* 5) [PluginV2DynamicExt]
(Unnamed Layer* 6) [Activation]
(Unnamed Layer* 7) [Convolution]
(Unnamed Layer* 8) [Convolution]
(Unnamed Layer* 9) [PluginV2DynamicExt]
(Unnamed Layer* 10) [Activation]
(Unnamed Layer* 11) [Convolution]
(Unnamed Layer* 12) [PluginV2DynamicExt]
(Unnamed Layer* 13) [Activation]
(Unnamed Layer* 14) [Convolution]
(Unnamed Layer* 15) [PluginV2DynamicExt]
(Unnamed Layer* 16) [ElementWise]
(Unnamed Layer* 17) [Convolution]
(Unnamed Layer* 18) [PluginV2DynamicExt]
(Unnamed Layer* 19) [Activation]
(Unnamed Layer* 20) [Convolution]
(Unnamed Layer* 21) [PluginV2DynamicExt]
(Unnamed Layer* 22) [ElementWise]
(Unnamed Layer* 23) [Convolution]
(Unnamed Layer* 24) [PluginV2DynamicExt]
(Unnamed Layer* 25) [Activation]
(Unnamed Layer* 26) [Convolution]
(Unnamed Layer* 27) [PluginV2DynamicExt]
(Unnamed Layer* 28) [ElementWise]
(Unnamed Layer* 29) [Convolution]
(Unnamed Layer* 30) [PluginV2DynamicExt]
(Unnamed Layer* 31) [Activation]
(Unnamed Layer* 32) [Convolution]
(Unnamed Layer* 33) [PluginV2DynamicExt]
(Unnamed Layer* 34) [ElementWise]
(Unnamed Layer* 35) [Convolution]
(Unnamed Layer* 36) [PluginV2DynamicExt]
(Unnamed Layer* 37) [Activation]
(Unnamed Layer* 38) [Convolution]
(Unnamed Layer* 39) [PluginV2DynamicExt]
(Unnamed Layer* 40) [ElementWise]
(Unnamed Layer* 41) [Convolution]
(Unnamed Layer* 42) [PluginV2DynamicExt]
(Unnamed Layer* 43) [Activation]
(Unnamed Layer* 44) [Convolution]
(Unnamed Layer* 45) [PluginV2DynamicExt]
(Unnamed Layer* 46) [ElementWise]
(Unnamed Layer* 47) [Convolution]
(Unnamed Layer* 48) [PluginV2DynamicExt]
(Unnamed Layer* 49) [Activation]
(Unnamed Layer* 50) [Convolution]
(Unnamed Layer* 51) [PluginV2DynamicExt]
(Unnamed Layer* 52) [ElementWise]
(Unnamed Layer* 53) [Convolution]
(Unnamed Layer* 54) [PluginV2DynamicExt]
(Unnamed Layer* 55) [Activation]
(Unnamed Layer* 56) [Convolution]
(Unnamed Layer* 57) [PluginV2DynamicExt]
(Unnamed Layer* 58) [ElementWise]
(Unnamed Layer* 59) [Deconvolution]
(Unnamed Layer* 60) [Padding]
(Unnamed Layer* 61) [Convolution]
(Unnamed Layer* 62) [PluginV2DynamicExt]
(Unnamed Layer* 63) [Activation]
(Unnamed Layer* 64) [Deconvolution]
(Unnamed Layer* 65) [Padding]
(Unnamed Layer* 66) [Convolution]
(Unnamed Layer* 67) [PluginV2DynamicExt]
(Unnamed Layer* 68) [Activation]
(Unnamed Layer* 69) [Convolution]
(Unnamed Layer* 70) [Activation]
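
Note that the network contains many PluginV2DynamicExt layers. The libstdc++ message "basic_string::_S_construct null not valid" generally means a null const char* was passed to a std::string constructor, so it is worth checking that each plugin's getPluginType(), getPluginVersion() and getPluginNamespace() never return nullptr and that serialize()/getSerializationSize() are consistent, since engine.serialize() calls into every plugin. For completeness, the sketch below shows the usual plugin registration pattern before parsing; "libmyplugins.so" is a placeholder for whatever library actually provides these plugins:

import ctypes
import tensorrt as trt

TRT_LOGGER = trt.Logger()

# Load the shared library implementing the custom IPluginV2DynamicExt layers
# ("libmyplugins.so" is a placeholder, not the actual library name).
ctypes.CDLL("libmyplugins.so")

# Register TensorRT's built-in plugin creators with the global registry.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")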

code:

import time

import numpy as np
import matplotlib.pyplot as plt
import tensorrt as trt

# TRT_LOGGER, allocate_buffers() and do_inference() follow the TensorRT Python samples (common.py).
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine_onnx(model_file):
    EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network(EXPLICIT_BATCH) as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 20  # 1 MiB workspace
        builder.max_batch_size = 1
        # builder.fp16_mode = True
        builder.strict_type_constraints = True
        # Load the ONNX model and parse it in order to populate the TensorRT network.
        with open(model_file, 'rb') as model:
            if not parser.parse(model.read()):
                # Print every parser error, not just the first one.
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        return builder.build_cuda_engine(network)

with build_engine_onnx(onnx_model_path) as engine:
    print('Completed')
    inputs, outputs, bindings, stream = allocate_buffers(engine)
    with engine.create_execution_context() as context:
        np.copyto(inputs[0].host, img.ravel().astype(np.float32))
        t = time.time()
        [output] = do_inference(context, bindings, inputs, outputs, stream)
        print('Inference Time: ', (time.time() - t))
        output = (output + 1) / 2
        out = np.reshape(output, (3, 1024, 1024), order='C')
        out = np.transpose(out, (1, 2, 0))
        plt.imsave('./result_1.png', out)
        print('Engine name: ', engine.name)
        ######################################################################################################
        # The code works up to this point; executing the lines below produces the error:
        # "[TensorRT] ERROR: FAILED_ALLOCATION: basic_string::_S_construct null not valid"
        with open("./serialized_model/sample.engine", "wb") as f:
            f.write(engine.serialize())
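
A variant worth trying (sketch only; it assumes the same TRT_LOGGER and onnx_model_path as above) is to serialize the engine inside the builder scope, while the builder, network, and parser are still alive, and then check that the saved plan can be deserialized again:

def build_and_save_engine(model_file, engine_path="./serialized_model/sample.engine"):
    EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network(EXPLICIT_BATCH) as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 20
        builder.max_batch_size = 1
        with open(model_file, 'rb') as model:
            if not parser.parse(model.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        engine = builder.build_cuda_engine(network)
        # Serialize while the builder/network/parser (and the plugin objects they hold) are still in scope.
        with open(engine_path, 'wb') as f:
            f.write(engine.serialize())
        return engine

engine = build_and_save_engine(onnx_model_path)

# Round-trip check: the saved plan should deserialize cleanly
# (custom plugin creators must be registered in this process as well).
with open("./serialized_model/sample.engine", 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine2 = runtime.deserialize_cuda_engine(f.read())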

Hi,

Can you share a sample ONNX model that reproduces this issue so I can further debug?