Same TensorRT code gets different results

First, I wrote the TensorRT-SSD code on a GTX 1080, and it runs successfully.
Then I input the same picture twice and got two different results.
For example, SSD outputs a result containing 4 position values:
the first run: 0.4523 0.1240 0.8511 0.4512
the second run: 0.4531 0.1237 0.8531 0.4512
The two outputs are slightly different. Although this does not affect the final detection accuracy, I want to know what causes it.
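Jitter of this size between runs can be checked with a tolerance-based comparison rather than exact equality. A minimal sketch in plain Python, using the two example outputs above (the `boxes_close` helper and the tolerance value are my own, for illustration only):

```python
# Compare two detection outputs with an absolute tolerance instead of exact
# equality, since non-deterministic GPU kernels can change low-order bits
# between runs of the same engine on the same input.

def boxes_close(a, b, atol=5e-3):
    """Return True if every coordinate differs by at most `atol`."""
    return len(a) == len(b) and all(abs(x - y) <= atol for x, y in zip(a, b))

run1 = [0.4523, 0.1240, 0.8511, 0.4512]
run2 = [0.4531, 0.1237, 0.8531, 0.4512]

print(boxes_close(run1, run2))  # a 5e-3 tolerance absorbs the jitter: True
print(run1 == run2)             # exact comparison fails: False
```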

Then I copied this code to a TX2, and it also ran successfully.
But the TX2's output is different from the GTX 1080's.
For example, given a picture containing several objects, GTX1080-TensorRT can detect all the objects, but TX2-TensorRT finds only one or two.

PC: GTX 1080, CUDA 8.0, cuDNN 7, TensorRT 3.0.4
TX2: JetPack 3.2, TensorRT 3.0.3

Hello,

The generated plan file must be retargeted to the specific GPU if you want to run it on a different GPU.

regards,
NVIDIA Enterprise Support

I generated the plan file on both devices; the GTX 1080's plan file is different from the TX2's.
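One quick way to confirm this is to compare the two serialized plans byte-for-byte, e.g. by hash; they are expected to differ, since a plan is tuned to the GPU it was built on. A small sketch (the file names are hypothetical placeholders):

```python
# Hash two TensorRT plan files to confirm they are distinct artifacts.
import hashlib

def file_digest(path):
    """SHA-256 of a file's contents, read in chunks to handle large plans."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Example (paths are placeholders for the two generated plans):
# print(file_digest("ssd_gtx1080.plan") == file_digest("ssd_tx2.plan"))
```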

Hello,

2 requests:

  1. Can you update to TensorRT 5 with JetPack 4.1 (TRT 5), or JetPack 3.3 (TRT 4)? Many improvements and fixes have been submitted since TRT 3.0.

  2. Can you share a small repro package containing the code used to generate the plan files, the 2 plan files for both GPUs, a set of images that demonstrate what you're seeing, and any scripts you are using to run inference? You can DM me if you don't want to post publicly.

regards,
NVIDIA Enterprise Support

Thanks for your reply

  1. I will try to update TensorRT

  2. https://github.com/Ghustwb/MobileNet-SSD-TensorRT
    This is my TensorRT code.

Can anyone help me?
There are 4 pictures showing the detection results:
The first: GTX1080_mobileNetSSD_Caffe

The second: GTX1080_mobileNetSSD_TensorRT3.0.4

The third: TX2_mobileNetSSD_Caffe

The last: TX2_mobileNetSSD_TensorRT3.0.4

All of the above results were produced with the same code, the same caffemodel, and the same input image.

As can be seen from the four pictures:

1. The Caffe results are the same on both platforms.

2. On the same hardware platform, the TensorRT result is different from Caffe's.

3. The accuracy on the GTX 1080 is higher than on the TX2. Why?

Why is the TensorRT result on the TX2 so bad?
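One way to put the gap between the two platforms in numbers instead of comparing pictures by eye: match each box from the stronger run against the best-overlapping box from the weaker run by IoU (intersection-over-union) and count how many detections are recovered. A rough sketch (the helper names and sample boxes are mine, not from the repo; boxes use normalized [xmin, ymin, xmax, ymax] like the SSD output above):

```python
def iou(a, b):
    """Intersection-over-union of two [xmin, ymin, xmax, ymax] boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def matched(ref_boxes, test_boxes, thresh=0.5):
    """Count reference boxes that have a matching test box (IoU >= thresh)."""
    return sum(any(iou(r, t) >= thresh for t in test_boxes) for r in ref_boxes)

ref = [[0.10, 0.10, 0.40, 0.40], [0.50, 0.50, 0.90, 0.90]]  # e.g. GTX 1080 run
test = [[0.11, 0.10, 0.41, 0.41]]                            # e.g. TX2 misses one
print(matched(ref, test), "of", len(ref), "boxes recovered")
```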

Please keep us updated on whether updating to the latest JetPack/TRT helps. If you'd like, please cross-post on the Jetson forum for more visibility.

https://devtalk.nvidia.com/default/board/139/jetson-embedded-systems/

Hello
Can I upgrade TensorRT without JetPack?
Is there a software package to install TensorRT 4?
Flashing with JetPack takes too long.

Hello,

JetPack comes as a bundle with many dependencies. We don't recommend upgrading TRT independently without JetPack.

I came across a similar problem.
I referred to this code: https://github.com/saikumarGadde/tensorrt-ssd-easy.

The result changes every time I run inference. Although the variation is slight, the result is below.


The results differ between the two detections on image-71.

And there is a big difference between the 1080 Ti and the Jetson TX2:

the confidence on image-71 is much lower, and I don't know why.

I tried your code on GitHub; it seems you have solved this problem? I ran it many times on the 1080 Ti and got the same coordinate values, and the results on the Jetson TX2 have slight differences, but they are close to each other.

Could you give me some advice? I would be very grateful.

Hi Arleyzhang, have you solved it? I have the same problem; please help me!