Hello All,
I have a model that I trained in Keras. I converted the saved Keras model to ONNX using the keras2onnx package in Python, and then use the following code to convert the ONNX model to a TensorRT engine:

EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network(EXPLICIT_BATCH) as network, \
     trt.OnnxParser(network, TRT_LOGGER) as parser:
    print("FP16", builder.platform_has_fast_fp16)
    if not os.path.exists(onnx_model_file):
        print('ONNX file {} not found.'.format(onnx_model_file))
    print('Beginning ONNX file parsing')
    with open(onnx_model_file, 'rb') as model:
        if not parser.parse(model.read()):
            for error in range(parser.num_errors):
                print(parser.get_error(error))
    network.mark_output(network.get_layer(network.num_layers - 1).get_output(0))
    engine = builder.build_cuda_engine(network)
    with open(engine_file, "wb") as f:
        f.write(engine.serialize())

Running the code gives me the following error:

FP16 False
Loading ONNX file from path model.onnx...
In node 1 (importModel): INVALID_GRAPH: Assertion failed: tensors.count(input_name)
Beginning ONNX file parsing
[TensorRT] INFO: Detected 1 inputs and 1 output network tensors.

I am getting the number of layers as zero. The ONNX model looks fine when inspected in Netron, with the correct input and output.

I use the following code to convert the Keras model to an ONNX model:

with open(model_json_fname, 'r') as f:
    model = model_from_json(f.read())
temp_model_file = 'model.onnx'
onnx_model = keras2onnx.convert_keras(model, model.name)
keras2onnx.save_model(onnx_model, temp_model_file)

It is possible to force keras2onnx to use either Keras or tf.keras. I have tried both, with the same error.

TRT Version =
CUDA Version = 10.1
Driver Version = 440.64.00
Card = GTX 1050
Python Version = 3.5.2
OS Version = Ubuntu 16.04
ONNX Version = 1.6.0
Keras Version = 2.2
TF Version = 1.15
keras2onnx = 1.6.5

Can you share a sample ONNX model that fails so I can take a look?

Here is the link to the model:


Sorry for the delay.

I just ran your model using TensorRT 7 and was able to parse it just fine. Do you mind upgrading to TensorRT 7?

Thanks for checking this. I have updated my TRT to 7 and I get the following error:
[TensorRT] ERROR: Network has dynamic or shape inputs, but no optimization profile has been defined

Would you mind sharing your parser code, or pointing out what I should change in mine?

Ah, sorry - I used trtexec --explicitBatch --onnx=model.onnx, which I believe creates a dummy profile for batch size 1 by default, for the sake of parsing, when no profile is specified.

You can also reference this thread for how to create profiles using trtexec if interested: TensorRT 7 ONNX models with variable batch size
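For example, an invocation with explicit min/opt/max shapes looks roughly like the following. This is a sketch: the tensor name `input_1` and the 224x224x3 dimensions are placeholders — substitute the actual input name and shape from your model (Netron will show them):

```shell
# Build an engine with an optimization profile covering batch sizes 1-8.
# "input_1" and the HxWxC dimensions below are placeholders.
trtexec --explicitBatch --onnx=model.onnx \
        --minShapes=input_1:1x224x224x3 \
        --optShapes=input_1:4x224x224x3 \
        --maxShapes=input_1:8x224x224x3 \
        --saveEngine=model.engine
```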

For doing this with the Python API, you could reference this script, which will create some default optimization profiles for various batch sizes.

It should work with something like:

python3 --explicit-batch --onnx=model.onnx 

You can tweak/use parts of the code for your needs, you definitely don’t have to use it as is.
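In case it helps, here is a rough sketch of what creating an optimization profile looks like with the TensorRT 7 Python API. The input tensor name and the shapes are assumptions for illustration — adapt them to your model:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network(EXPLICIT_BATCH) as network, \
     trt.OnnxParser(network, TRT_LOGGER) as parser, \
     builder.create_builder_config() as config:
    with open('model.onnx', 'rb') as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))

    # Each dynamic input needs min/opt/max shapes in a profile.
    # "input_1" and the dimensions below are placeholders.
    profile = builder.create_optimization_profile()
    profile.set_shape('input_1',
                      (1, 224, 224, 3),   # min
                      (4, 224, 224, 3),   # opt
                      (8, 224, 224, 3))   # max
    config.add_optimization_profile(profile)

    # With a config, use build_engine instead of build_cuda_engine.
    engine = builder.build_engine(network, config)
    with open('model.engine', 'wb') as f:
        f.write(engine.serialize())
```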

Thanks, I managed to make it work on TRT 7 as you suggested. Is there any way I can make it work on TRT 6? I need to perform inference on Jetson products, and I still get the same error on TRT 6.

trtexec helped me understand the problem with TRT 6. I needed to save the ONNX model with target_opset=8. Now I can parse the ONNX file in both TRT 6 and TRT 5.

I am not able to convert my weights.h5 to ONNX format; I have only a weights.h5 file with me. Kindly help.
Thanks in advance.

Hi @GalibaSashi,

For help converting your Keras model to ONNX, I recommend reaching out to the keras2onnx team: