Wrong inference with TensorRT 2.1 sampleMNISTAPI using my own Keras/Theano-trained model

Hello All,

I trained a CNN (50 epochs) on the MNIST dataset in Theano/Keras in FP32. I saved model.json and weights.h5, then converted weights.h5 into the 'mnistapi.wts' file required by sampleMNISTAPI.cpp. I recreated the same model in TensorRT in the CreateMNISTEngine() function in sampleMNISTAPI.cpp. Everything compiles, but when I run inference on the .pgm images in the data/mnist/ folder, the results are wrong. The original sampleMNISTAPI shipped with TensorRT, however, always gives the correct output.
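For context, the training/saving side looks roughly like this (a minimal sketch only, not my literal script; the architecture, the layer names, and the Keras 1.x-style calls are assumptions):

```python
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense

# Hypothetical LeNet-style MNIST CNN, channels-first as Theano expects.
model = Sequential([
    Convolution2D(20, 5, 5, input_shape=(1, 28, 28)),
    MaxPooling2D(pool_size=(2, 2)),
    Convolution2D(50, 5, 5),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(500, activation='relu'),
    Dense(10, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
# ... model.fit(...) for 50 epochs on MNIST ...

with open('model.json', 'w') as f:
    f.write(model.to_json())      # architecture
model.save_weights('weights.h5')  # weights, converted to mnistapi.wts below
```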

I am wondering whether my weights are not being stored properly. I dumped the weights into the .wts file that the sample reads. I suspected the problem was dumping the weight matrices row-wise into the .wts file, but I tried column-wise as well and the inference is still not correct.
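To make the question concrete, my dump step looks roughly like the sketch below. The Keras layer names, the target blob names, and the per-line layout of "name, element count, hex-encoded float32 words" are assumptions; please check them against loadWeights() in your copy of sampleMNISTAPI.cpp.

```python
import struct
import numpy as np
from keras.models import model_from_json

# Rebuild the trained model from the files saved above.
with open('model.json') as f:
    model = model_from_json(f.read())
model.load_weights('weights.h5')

# Hypothetical mapping from Keras layer names to the blob names that
# the TensorRT network definition looks up; adjust to your own model.
NAME_MAP = {
    'convolution2d_1': ('conv1filter', 'conv1bias'),
    'convolution2d_2': ('conv2filter', 'conv2bias'),
    'dense_1':         ('ip1filter',  'ip1bias'),
    'dense_2':         ('ip2filter',  'ip2bias'),
}

def float_to_hex(x):
    # Raw bit pattern of a float32 written as hex; loadWeights() in
    # sampleMNISTAPI.cpp reads the values back with std::hex.
    return format(struct.unpack('<I', struct.pack('<f', x))[0], 'x')

entries = []
for layer in model.layers:
    if layer.name not in NAME_MAP:
        continue
    w, b = layer.get_weights()
    filt_name, bias_name = NAME_MAP[layer.name]
    # The flattening order here is exactly the row-wise/column-wise
    # ambiguity mentioned above: flatten() uses row-major (C) order,
    # so transpose first if your weight layout needs it.
    entries.append((filt_name, np.asarray(w, dtype=np.float32).flatten()))
    entries.append((bias_name, np.asarray(b, dtype=np.float32).flatten()))

with open('mnistapi.wts', 'w') as out:
    out.write('%d\n' % len(entries))
    for name, values in entries:
        out.write('%s %d' % (name, values.size))
        for v in values:
            out.write(' ' + float_to_hex(v))
        out.write('\n')
```

If I read the TensorRT documentation correctly, convolution kernel weights are expected in KCRS order (output maps, input channels, kernel rows, kernel columns), which is why I suspect the flattening/transpose choice in the first place.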

I also don't understand the difference between conv1weights and conv1filter, ip1weights and ip1filter, etc. in the original mnistapi.wts file. The network definition in sampleMNISTAPI.cpp never reads conv1weights anywhere; it only uses conv1filter, conv1bias, ip1filter, ip1bias, etc., and not conv1weights.
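One quick way to see whether those duplicate-looking entries actually hold the same numbers is to parse the shipped mnistapi.wts and summarize each blob, for example with the sketch below (again assuming the "name, count, hex words" per-line layout):

```python
import struct
import numpy as np

# List every blob in mnistapi.wts with its element count and mean value,
# so conv1weights vs. conv1filter (etc.) can be compared directly.
with open('mnistapi.wts') as f:
    tokens = f.read().split()

idx = 1  # tokens[0] is the number of entries
while idx < len(tokens):
    name = tokens[idx]
    count = int(tokens[idx + 1])
    words = tokens[idx + 2: idx + 2 + count]
    vals = np.array([struct.unpack('<f', struct.pack('<I', int(w, 16)))[0]
                     for w in words], dtype=np.float32)
    print('%-14s count=%-6d mean=% .6f' % (name, count, vals.mean()))
    idx += 2 + count
```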

Can anyone give me any leads on this problem?

Thanks

Hi, have you found a solution to this problem? I am also confused about the difference between conv1weights and conv1filter. I guessed they were aliases of each other, but after looking into mnistapi.wts I found that the values of ip2filter and ip2weights differ, so that guess is probably wrong.