Description
I trained a simple MNIST model with custom data in Keras. The model is good and produces correct output.
I then generated an ONNX model using this script:
model.save(saved_model)
print("training is completed:")
onnx_model = keras2onnx.convert_keras(model, model.name, target_opset=8)
onnx_model_name = model_path + model_prefix + ".onnx"
print("model name is: ", model.name)
keras2onnx.save_model(onnx_model, onnx_model_name)
print("Keras model and ONNX model have been stored.")
Model is converted successfully.
When I run inference using TensorRT, the engine is created successfully, but I get NaN for every element of the output vector.
My preprocessing code is this:
cv::Mat resized;
cv::resize(inputImg, resized, cv::Size(28, 28), 0, 0, cv::INTER_CUBIC);
cv::Mat img_float;
//resized.convertTo(img_float, CV_32FC1, 1 / 255.0);
// resized must be a single-channel 8-bit image (CV_8UC1) here;
// cv::Mat::at requires an explicit element type.
for (int i = 0; i < inputHeight; ++i) {
    for (int j = 0; j < inputWidth; ++j) {
        *hostData = resized.at<uchar>(i, j) / 255.0f;
        hostData++;
    }
}
return hostData;
I also tried with trtexec, and it reports PASSED. That suggests the model itself is correct, and I guess the problem is in the preprocessing step.
Could you please let me know what the problem is and how to fix it?
Environment
TensorRT Version: 6.0.2
CUDA Version: 10.2
TensorFlow Version (if applicable): 1.14