Hello All,
I have a model trained with Keras. I converted the saved Keras model to ONNX using the keras2onnx package in Python, and then use the following code to build a TensorRT engine from the ONNX model:
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
with trt.Builder(TRT_LOGGER) as builder, builder.create_network(EXPLICIT_BATCH) as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
    print("FP16", builder.platform_has_fast_fp16)
    if not os.path.exists(onnx_model_file):
        print('ONNX file {} not found.'.format(onnx_model_file))
        exit(0)
    print(network.num_layers)
    with open(onnx_model_file, 'rb') as model:
        if not parser.parse(model.read()):
            for error in range(parser.num_errors):
                print(parser.get_error(error))
        print('Beginning ONNX file parsing')
        parser.parse(model.read())
    #print(network.num_layers)
    network.mark_output(network.get_layer(network.num_layers - 1).get_output(0))
    engine = builder.build_cuda_engine(network)
    with open(engine_file, "wb") as f:
        f.write(engine.serialize())
Running the code produces the following output, including an error:
FP16 False
Loading ONNX file from path model.onnx...
0
In node 1 (importModel): INVALID_GRAPH: Assertion failed: tensors.count(input_name)
Beginning ONNX file parsing
[TensorRT] INFO: Detected 1 inputs and 1 output network tensors.
The network reports zero layers after parsing. The ONNX model itself looks fine when inspected with Netron, with the correct input and output.
This is the code I use to convert the Keras model to ONNX:
with open(model_json_fname, 'r') as f:
    model = model_from_json(f.read())
model.load_weights(model_fname)
temp_model_file = 'model.onnx'
onnx_model = keras2onnx.convert_keras(model, model.name)
keras2onnx.save_model(onnx_model, temp_model_file)
keras2onnx can be forced to use either Keras or tf.keras; I have tried both, with the same error.
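For reference, the backend switch in keras2onnx is the TF_KERAS environment variable, which the package reads at import time, so it has to be set before the first import. A minimal sketch:

```python
import os

# keras2onnx checks the TF_KERAS environment variable when it is
# imported, so it must be set before the first `import keras2onnx`
# anywhere in the process.
os.environ['TF_KERAS'] = '1'  # '1' selects tf.keras; unset or '0' selects standalone Keras

# Only after the variable is set:
# import keras2onnx
```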
TRT Version = 6.0.1.5
Cuda Version = 10.1
Driver Version: 440.64.00
Card = GTX1050
Python Version = 3.5.2
OS Version = Ubuntu16.04
Onnx Version = 1.6.0
Keras Version = 2.2
TF Version = 1.15
keras2onnx = 1.6.5