Does having a different environment give you different results?


I'm trying to convert an ONNX file to a TensorRT engine. When I run the conversion on Google Colab (which uses a Tesla K80), it works perfectly: the file size decreases by about 50% and inference speeds up by about 300%. But when I do the same thing (same file, same command options) on my local desktop (GTX 1060 3GB, Windows 10), the file size actually increases and inference is only about 5% faster. I know inference speed can be affected by the environment, but can the environment also change the result of the ONNX-to-TensorRT conversion, as in my case? Or did I do something wrong?
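For reference, a typical way to run this conversion is with the trtexec tool that ships with TensorRT. A minimal sketch of such an invocation is below; the model filename is a placeholder, and the flags shown are assumptions since the exact command was not included in the post:

```shell
# Hypothetical trtexec invocation (model.onnx is a placeholder filename).
# --fp16 allows half-precision kernels; how much this helps depends on the
# GPU architecture, which may partly explain different results per machine.
trtexec --onnx=model.onnx --saveEngine=model.trt --fp16
```

Because TensorRT selects the fastest kernels it finds for the specific GPU at build time, engines built on different GPUs can legitimately differ in size and speedup.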


TensorRT Version:
GPU Type: GTX 1060 3GB
Nvidia Driver Version: 512.15
CUDA Version: 11.3
CUDNN Version: 8.2.1
Operating System + Version: Windows 10 Home
Python Version (if applicable): 3.8.10
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.11.0+cu113
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Please refer to the installation steps in the link below in case you are missing anything.

Also, we suggest using the TRT NGC containers to avoid any system-dependency-related issues.