The precision of the results between .pb and .uff is not the same

Hi all,
I trained a Tensorflow model using network of lenet5 for the binary classification. and froze it to .pb, converted it to .uff format. but the precision of the results bewteen .pb and .uff are not the same(with the same input), here are the results:

the result of the pb: [-2.21338582 3.26708937]
the result of the uff: [-2.2126033 3.2662733]

I also compared the outputs of the intermediate layers; the results are below:

(1)layer1-conv1/Conv2D

the result of the pb: [-4.77897644e-01 -4.29949582e-01 -2.56202489e-01 … 7.33150244e-02 2.54558742e-01 3.09298754e-01]

the result of the uff:[-0.47789764 -0.42994958 -0.2562025 … 0.07331502 0.25455874 0.30929875]
(2)layer2-pool1/MaxPool

the result of the pb:[0.59630603 0.98900318 0.83609992 … 0.00706919 0.0718873 0.30787104]

the result of the uff:[0.596306 0.9890032 0.8360999 … 0.00706919 0.0718873 0.30787104]
(3)layer3-conv2/Conv2D
the result of the pb:[0.81891322 0.15238184 -0.15142109 … 0.15384331 0.54967135 0.49998853]
the result of the uff:[ 0.81894773 0.1524996 -0.15138532 … 0.15384705 0.54966927 0.49998504]

The weird thing is that after the first conv the results are almost the same, but after the second conv they are not. I don’t know why.
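One way to quantify how far apart the two runtimes actually are is to compute the maximum absolute and relative differences. A minimal NumPy sketch, using the final output values quoted above (the comparison helper itself is my own, not part of the original workflow):

```python
import numpy as np

# Final-layer outputs copied from the post above.
pb_out = np.array([-2.21338582, 3.26708937], dtype=np.float32)
uff_out = np.array([-2.2126033, 3.2662733], dtype=np.float32)

# Worst-case absolute and relative disagreement between the two runtimes.
abs_diff = np.abs(pb_out - uff_out).max()
rel_diff = (np.abs(pb_out - uff_out) / np.abs(pb_out)).max()
print(abs_diff, rel_diff)
```

For these values the mismatch is on the order of 1e-4 relative, i.e. well above float32 rounding noise for a single operation, which suggests the two runtimes are not computing numerically identical convolutions.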

Hello,

Are you comparing the result between TF-TRT(.pb) and TRT(.uff)?

Yes. And I ran some other experiments to find the cause.
Experiment 1: I built a network that has only one convolutional layer, and got these results:
(1) The input shape is 16×16×3, the kernel size of the convolutional layer is 5×5, and the output channel count is 32:
the output of the pb: [ 0.02732173 0.18055116 -0.45923412 …, -0.07804864 -0.05366367 -0.07163196]
the output of the trt: [ 0.02732173 0.18055116 -0.45923412 … -0.07804864 -0.05366367 -0.07163196]
(2) The input shape is 16×16×32, the kernel size of the convolutional layer is 5×5, and the output channel count is 32:
the output of the pb:[-0.44078344 -0.16376069 -0.65598416 …,-0.03106553 -0.80909407 -1.52670085]
the output of the trt: [-0.43814895 -0.16361836 -0.65496826 … -0.03120333 -0.80908704 -1.5267904 ]

Experiment 2: change the input shape:
(1) The input shape is 32×32×3, the kernel size of the convolutional layer is 5×5, and the output channel count is 32:
the output of the pb:[ 0.2247436 0.02252826 0.21561101 …,-0.23513959 -0.58312702 -0.21401025]
the output of the trt: [ 0.2247436 0.02252826 0.21561101 … -0.2351396 -0.583127 -0.21401025]
(2) The input shape is 32×32×32, the kernel size of the convolutional layer is 5×5, and the output channel count is 32:
the output of the pb: [-0.26229396 1.09917772 1.6225214 …,-0.1070288 0.2439363 -1.00126266]
the output of the trt: [-0.26229388 1.0991774 1.6225209 … -0.10702881 0.2439363 -1.0012628 ]

I want to know what the difference is in the convolution implementation between TensorFlow and TensorRT.

Actually, I am comparing the results between TF (.pb) and TRT (.uff).

Hello,

I suspect the padding is causing the problem. In TensorFlow, padding is asymmetric: “SAME” padding tries to pad evenly on the left and right, but if the number of columns to be added is odd, the extra column is added on the right, and the same logic applies vertically (there may be an extra row of zeros at the bottom). TensorRT, on the other hand, pads symmetrically (it increments the padding if the total is odd, and pads evenly on both sides).
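The two padding schemes described above can be sketched in a few lines of plain Python. This is an illustration of the rule as stated (TF “SAME” puts the extra column on the right/bottom; the symmetric variant rounds the total up to even), not the actual code of either library:

```python
def tf_same_pad(in_size, kernel, stride=1):
    """TensorFlow-style 'SAME' padding: if the total padding is odd,
    the extra column/row goes after (right/bottom)."""
    out_size = -(-in_size // stride)  # ceil(in_size / stride)
    pad_total = max((out_size - 1) * stride + kernel - in_size, 0)
    pad_before = pad_total // 2
    pad_after = pad_total - pad_before  # extra (if odd) lands here
    return pad_before, pad_after

def symmetric_pad(in_size, kernel, stride=1):
    """Symmetric padding as described in the reply: an odd total is
    incremented by one, then split evenly on both sides."""
    out_size = -(-in_size // stride)
    pad_total = max((out_size - 1) * stride + kernel - in_size, 0)
    if pad_total % 2:
        pad_total += 1
    return pad_total // 2, pad_total // 2

# With a 16-wide input, 5×5 kernel, stride 1, the total padding is even (4),
# so both schemes agree: (2, 2).
print(tf_same_pad(16, 5, 1), symmetric_pad(16, 5, 1))

# With stride 2 the total padding is odd (3), and the schemes diverge:
# TF pads (1, 2) while the symmetric scheme pads (2, 2).
print(tf_same_pad(16, 5, 2), symmetric_pad(16, 5, 2))
```

When the two schemes diverge, the padded inputs differ by a shifted column/row of zeros, so every convolution window near the affected border produces a slightly different sum, which matches the kind of small border-driven discrepancies seen above.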

Thanks.