Hi,
I'm not getting matching results from TensorRT and my trained TensorFlow model!
For example, for a batch of size 19, I get this from the trained TensorFlow model (3 classes):
[[6.0240232e-04 1.3105543e-03 9.9808705e-01]
[3.6373714e-03 6.9858474e-03 9.8937678e-01]
[2.9819757e-03 5.3826626e-03 9.9163532e-01]
[4.3369378e-03 7.5148577e-03 9.8814827e-01]
[8.7779770e-03 1.5823688e-02 9.7539830e-01]
[4.9051787e-03 8.8414559e-03 9.8625344e-01]
[7.6577030e-03 1.4652319e-02 9.7768992e-01]
[1.9303973e-01 1.7818052e-01 6.2877971e-01]
[5.8375727e-02 1.0351211e-01 8.3811218e-01]
[3.3485282e-03 5.2936333e-03 9.9135780e-01]
[2.1252513e-02 3.4929726e-02 9.4381779e-01]
[4.6547498e-03 4.0444736e-03 9.9130076e-01]
[8.4095538e-02 1.3293470e-01 7.8296977e-01]
[6.8616783e-03 1.5771488e-02 9.7736686e-01]
[2.9672135e-03 5.1083490e-03 9.9192446e-01]
[5.7883211e-03 1.1918653e-02 9.8229307e-01]
[2.7834701e-03 7.0321797e-03 9.9018431e-01]
[3.2245289e-03 6.9324719e-03 9.8984307e-01]
[1.1025379e-02 1.6933834e-02 9.7204077e-01]]
and for the same input, I get this from TensorRT (the probability mass has flipped from the last class to the first):
[[9.33836460e-01 6.61635026e-02 7.27533485e-12]
[9.39429879e-01 6.05701208e-02 3.69911090e-12]
[9.60956275e-01 3.90437581e-02 1.84191551e-12]
[9.52795386e-01 4.72046025e-02 1.99532123e-12]
[9.19843435e-01 8.01565349e-02 5.39801103e-12]
[9.51802194e-01 4.81977351e-02 5.10215741e-12]
[9.62418616e-01 3.75813469e-02 1.31316594e-12]
[9.84232545e-01 1.57674570e-02 1.14268006e-14]
[9.79626715e-01 2.03733463e-02 3.17116023e-14]
[9.95743811e-01 4.25621541e-03 8.78552331e-14]
[9.82334971e-01 1.76650658e-02 3.78184490e-13]
[9.75318611e-01 2.46814489e-02 3.20312830e-12]
[9.79469538e-01 2.05304530e-02 1.96992487e-12]
[9.49763775e-01 5.02362810e-02 7.18239812e-13]
[9.05427277e-01 9.45727080e-02 1.35735251e-11]
[9.17646766e-01 8.23532864e-02 2.70730747e-12]
[8.63423824e-01 1.36576220e-01 4.19122679e-12]
[9.23897922e-01 7.61020854e-02 1.90187167e-12]
[9.45324779e-01 5.46751693e-02 5.95223523e-13]]
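To quantify the mismatch, this is roughly the check I'm running (a sketch; tf_out and trt_out stand for the two arrays above):

import numpy as np

def compare_outputs(tf_out, trt_out, atol=1e-3):
    # Compare two (batch, num_classes) probability arrays.
    diff = np.abs(tf_out - trt_out)
    agree = np.mean(tf_out.argmax(axis=1) == trt_out.argmax(axis=1))
    print("max abs diff:", diff.max())
    print("fraction of rows with matching argmax:", agree)
    return np.allclose(tf_out, trt_out, atol=atol)

With the arrays above, not a single row agrees on the argmax: TensorFlow puts ~0.98 on the last class while TensorRT puts ~0.95 on the first.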
My model has the following layers (a code sketch follows the list):
- convolutional layers (using tf.layers.conv2d)
- leaky ReLU layers (using tf.maximum)
- batch normalization layers (using tf.layers.batch_normalization)
- a transpose from NHWC to NCHW at the beginning and from NCHW back to NHWC at the end (using tf.transpose)
- flatten (using tf.reshape)
- a dense layer (using tf.layers.dense)
- softmax (using tf.nn.softmax)
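In case the structure matters, here is a minimal sketch of how those pieces fit together; the filter count, kernel size, leaky-ReLU slope, and the 32x32x3 input shape below are placeholders, not my real hyperparameters:

import tensorflow as tf  # TF 1.x-style API, matching the ops listed above

def leaky_relu(x, alpha=0.1):
    # leaky ReLU expressed with tf.maximum; the slope alpha is a placeholder
    return tf.maximum(alpha * x, x)

def build_model(images, training=False):
    # NHWC -> NCHW at the beginning
    x = tf.transpose(images, [0, 3, 1, 2])
    x = tf.layers.conv2d(x, filters=32, kernel_size=3, padding='same',
                         data_format='channels_first')
    # batch norm over the channel axis (axis=1 in NCHW)
    x = tf.layers.batch_normalization(x, axis=1, training=training)
    x = leaky_relu(x)
    # NCHW -> NHWC at the end
    x = tf.transpose(x, [0, 2, 3, 1])
    # flatten with tf.reshape (static spatial shape assumed known)
    x = tf.reshape(x, [-1, 32 * 32 * 32])
    logits = tf.layers.dense(x, units=3)  # 3 classes
    return tf.nn.softmax(logits)

inputs = tf.placeholder(tf.float32, [None, 32, 32, 3])
probs = build_model(inputs)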
I hope the problem isn't that one of the ops I'm using is unsupported, because I've spent a lot of time getting to this stage! :(
Thanks…