TensorRT inference result is wrong and inconsistent with Caffe inference result

Dear NVIDIA experts,

I have encountered a problem and need your help.

I have trained a CNN for Chinese vehicle license plate OCR.

I tested the trained model with Caffe on an image named test.jpg and got the expected OCR result, as the attached caffe_result.png shows.
Then I converted the Caffe model to a TensorRT FP32 engine and tested the converted engine via the TensorRT C++ inference API. Unfortunately, the OCR result is wrong (please refer to the attached trt_result.png); the TensorRT result always seems to miss the first or the last character.
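
For context, the conversion and inference follow the standard TensorRT 5 Caffe-parser workflow. Below is a minimal sketch of that workflow, not our exact attached code; the binding names ("data", "prob") and the input/output sizes are placeholders for illustration, not the actual values from our prototxt:

```cpp
#include <NvInfer.h>
#include <NvCaffeParser.h>
#include <cuda_runtime_api.h>
#include <cassert>
#include <iostream>
#include <vector>

using namespace nvinfer1;
using namespace nvcaffeparser1;

// Minimal logger required by the TensorRT builder and runtime.
class Logger : public ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity != Severity::kINFO) std::cerr << msg << std::endl;
    }
} gLogger;

int main() {
    // ---- Build an FP32 engine from the Caffe model (TRT 5 API) ----
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    // "prob" is a placeholder; use the real output blob name
    // from the deploy prototxt.
    const IBlobNameToTensor* blobs = parser->parse(
        "deploy.prototxt", "model.caffemodel", *network, DataType::kFLOAT);
    network->markOutput(*blobs->find("prob"));

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 28);  // 256 MB scratch space
    ICudaEngine* engine = builder->buildCudaEngine(*network);
    assert(engine != nullptr);

    parser->destroy();
    network->destroy();
    builder->destroy();

    // ---- Run inference with the C++ API ----
    IExecutionContext* context = engine->createExecutionContext();

    // Placeholder sizes: match them to the real input/output dims.
    const int kInputSize  = 3 * 24 * 94;
    const int kOutputSize = 88;
    std::vector<float> input(kInputSize);   // fill with preprocessed test.jpg
    std::vector<float> output(kOutputSize);

    void* buffers[2];
    const int inputIndex  = engine->getBindingIndex("data");  // placeholder
    const int outputIndex = engine->getBindingIndex("prob");  // placeholder
    cudaMalloc(&buffers[inputIndex],  kInputSize  * sizeof(float));
    cudaMalloc(&buffers[outputIndex], kOutputSize * sizeof(float));

    cudaMemcpy(buffers[inputIndex], input.data(),
               kInputSize * sizeof(float), cudaMemcpyHostToDevice);
    context->execute(1, buffers);  // synchronous execution, batch size 1
    cudaMemcpy(output.data(), buffers[outputIndex],
               kOutputSize * sizeof(float), cudaMemcpyDeviceToHost);

    // ... decode `output` into plate characters here ...

    cudaFree(buffers[inputIndex]);
    cudaFree(buffers[outputIndex]);
    context->destroy();
    engine->destroy();
    return 0;
}
```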

I have attached the caffemodel, the prototxt, the converted TensorRT engine, and the TensorRT / Caffe inference test code. Please help me fix this problem.

My environment is listed below:

OS: Ubuntu 16.04
CUDA driver: 418.39
GPU: Tesla T4
TensorRT: 5.1.2.1 + CUDA 10.1
cuDNN: 7.5
CUDA: 10.1
Caffe: 1.1.0
newlp_attach.7z (15.4 MB)

Hi,

Could you please try the latest TRT 7 release?

If the issue persists, please share the error log and the Caffe model prototxt file as well.

Thanks

Dear @SunilJB,

We upgraded to TensorRT 7 and rebuilt the engine, but got the same wrong results. We are building Caffe SSD 512, and the environment configuration is below (a sketch of the TRT 7 build path follows the list):

OS: Ubuntu 16.04
CUDA driver: 410.48
GPU: RTX 2080 Ti
TensorRT: 7.0.0.1 + CUDA 10.0
cuDNN: 7.5
CUDA: 10.0
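
For reference, a minimal sketch of the build path under TRT 7, assuming the same placeholder blob names and the same includes/logger as the earlier sketch (the Caffe parser still requires an implicit-batch network, hence the 0 flags):

```cpp
// Reuses the includes, namespaces, and gLogger from the sketch above.
// In TRT 7 the builder API changes: createNetworkV2 replaces
// createNetwork, and buildEngineWithConfig replaces buildCudaEngine.
IBuilder* builder = createInferBuilder(gLogger);
INetworkDefinition* network = builder->createNetworkV2(0U);  // implicit batch
ICaffeParser* parser = createCaffeParser();

const IBlobNameToTensor* blobs = parser->parse(
    "deploy.prototxt", "model.caffemodel", *network, DataType::kFLOAT);
network->markOutput(*blobs->find("prob"));  // placeholder blob name

IBuilderConfig* config = builder->createBuilderConfig();
config->setMaxWorkspaceSize(1 << 28);
builder->setMaxBatchSize(1);
ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
```

The inference side (getBindingIndex / execute) is unchanged from the TRT 5 sketch for an implicit-batch engine.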