Problem with TensorRT and GeForce RTX 2080


When I run the code from this page ( ) under two different environment configurations, I get two different results, of which only the first is correct. (Since I cannot upload pictures here, I have put the GitHub issue link here: )

  1. config1: GeForce RTX 2070, CUDA V10.1.243, cuDNN 7.6.3, driver 418.87.01, TensorRT

  2. config2: GeForce RTX 2080, CUDA V10.0.130, cuDNN 7.5.0 / cuDNN 7.6.3, driver 410.48, TensorRT

Since everything works correctly under the following configurations:

  1. GeForce RTX 2080 Ti without TensorRT
  2. GeForce RTX2070 with TensorRT
  3. GeForce RTX2070 without TensorRT

but goes wrong on the RTX 2080 with TensorRT, we suspect that something inside TensorRT is not working correctly on the RTX 2080.
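To show that the outputs really diverge between configurations, a tolerance-based comparison of the network outputs is one way to confirm it. The sketch below is only an illustration, assuming both the reference output (e.g. from the framework without TensorRT) and the TensorRT output are available as NumPy arrays; the function name `outputs_match` and the tolerances are hypothetical choices, not part of the original report.

```python
import numpy as np

def outputs_match(reference, candidate, rtol=1e-3, atol=1e-3):
    """Return True if the candidate output agrees with the reference
    output within the given relative/absolute tolerances."""
    reference = np.asarray(reference)
    candidate = np.asarray(candidate)
    # Mismatched shapes already indicate a broken inference path.
    if reference.shape != candidate.shape:
        return False
    return bool(np.allclose(reference, candidate, rtol=rtol, atol=atol))
```

For example, comparing the same model's output on the RTX 2070 (with TensorRT) against the RTX 2080 (with TensorRT) with this check would make the divergence reproducible as a pass/fail result rather than a visual comparison of pictures.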

Could you please help investigate this issue with TensorRT on the RTX 2080? Thank you very much!


There are lots of new features and fixes in TRT 7.
I suggest you migrate your code to TRT 7; it should fix the issue.