Output shows NaN with sample MNIST on a custom-trained Keras model converted to ONNX


I have trained a simple MNIST model with custom data in Keras. The model is good and provides good output.
Later I generated the ONNX model using this script:

print("training is completed:")
onnx_model = keras2onnx.convert_keras(model, model.name, target_opset=8)
onnx_model_name = model_path + model_prefix + ".onnx"
print("model name is: ", model.name)
keras2onnx.save_model(onnx_model, onnx_model_name)
print("Keras model and ONNX model have been stored.")

Model is converted successfully.
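Before debugging the TensorRT side, it may help to rule out the conversion itself by running the saved ONNX file through ONNX Runtime. This is only a sketch under assumptions: that onnxruntime is installed, that the model has a single NHWC input of shape (1, 28, 28, 1) (the usual layout for a channels-last Keras model), and that `path` is wherever you saved the file. Check the real input name and shape against your model.

```python
import numpy as np

def make_input():
    # Keras models are usually channels-last, so keras2onnx typically
    # keeps an NHWC input of shape (1, 28, 28, 1). Verify this against
    # the actual model before relying on it.
    return np.random.rand(1, 28, 28, 1).astype(np.float32)

def check_onnx(path):
    # Hypothetical sanity check: if the output is already NaN here,
    # the problem is in the conversion, not in TensorRT preprocessing.
    import onnxruntime as ort
    sess = ort.InferenceSession(path)
    input_name = sess.get_inputs()[0].name
    out = sess.run(None, {input_name: make_input()})[0]
    assert np.isfinite(out).all(), "NaN already present at the ONNX level"
    return out
```

If ONNX Runtime produces finite outputs but TensorRT produces NaN with the same input, the issue is on the TensorRT/preprocessing side.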

When I run inference using TensorRT, the engine is created successfully but I get NaN for the entire output vector.
My preprocessing code is this:

cv::Mat resized;
cv::resize(inputImg, resized, cv::Size(28, 28), 0, 0, cv::INTER_CUBIC);
// cv::Mat img_float;
// resized.convertTo(img_float, CV_32FC1, 1 / 255.0);
// resized is 8-bit grayscale (CV_8UC1), so read each pixel as uchar
float* dst = hostData;
for (int i = 0; i < inputHeight; ++i) {
    for (int j = 0; j < inputWidth; ++j) {
        *dst++ = resized.at<uchar>(i, j) / 255.0f;  // advance the pointer for every pixel
    }
}
return hostData;

I also tried trtexec, and it reports PASSED. That means the model is correct, so I guess there is some problem with the preprocessing step.
Could you please let me know the problem and a solution for this?
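For comparison, here is a minimal NumPy sketch of what the preprocessing loop should produce (an assumption on my part: pixels scaled to [0, 1], with every element of the buffer written). A loop that returns early, or writes through a pointer it never advances, leaves the rest of the host buffer as uninitialized memory, which can surface downstream as NaN in the output.

```python
import numpy as np

def preprocess(img_u8):
    # img_u8: 28x28 uint8 grayscale image (already resized)
    x = img_u8.astype(np.float32) / 255.0  # scale every pixel to [0, 1]
    return x.reshape(-1)  # flat buffer: all 784 elements are written

# A white test image: every normalized pixel should come out as exactly 1.0
img = np.full((28, 28), 255, dtype=np.uint8)
buf = preprocess(img)
```

Dumping the first few values of the host buffer on the C++ side and comparing against this reference is a quick way to confirm the preprocessing is filling the buffer correctly.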


TensorRT Version: 6.0.2

CUDA Version: 10.2

TensorFlow Version (if applicable): 1.14

Hi @god_ra,
Request you to share your ONNX model.


This is the custom-trained model:
keras_model_at_30_epochs.onnx (446.9 KB)
This is the standard model:
model.onnx (25.8 KB)

The test images are MNIST images themselves.

@AakankshaS @AastaLLL,
Any solution for this problem?
Even the standard ONNX model gives incorrect output.

Hi @god_ra ,
Can you please try using the latest TensorRT release?
If the issue still occurs, please share the verbose logs.