ONNX TensorRT Inference Gives Wrong Results

I ran the exact same code as the notebook in NVIDIA's TensorRT repository on GitHub: Using Tensorflow 2 through ONNX.ipynb

But I obtain different, wrong outputs after running inference. After running the Keras model, I obtain:

But after doing inference with ONNX TensorRT, it gives:

I am using Windows 11 with WSL2 (tried both Ubuntu 20.04 and 18.04), and I have installed:

cuda-11-6
libcudnn8=8.4.1.50-1+cuda11.6
libcudnn8-dev=8.4.1.50-1+cuda11.6
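To confirm which CUDA toolkit and cuDNN versions are actually active inside WSL2 (and that they match the packages above), a quick check along these lines can help; these are standard CUDA and dpkg commands:

```shell
# Print the CUDA compiler version (should report 11.6 for cuda-11-6)
nvcc --version

# Confirm the GPU and driver are visible inside WSL2
nvidia-smi

# List the installed cuDNN packages and their exact versions
dpkg -l | grep libcudnn
```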

For TensorRT, the following libraries were installed:

Other things I did during installation were:

  1. Copy the trtexec binary from /usr/src/tensorrt/bin to /usr/local/bin/. Initially the command was not found, but I was able to execute trtexec after that.
  2. Copy libcurand.so and libcurand.so.10 from /usr/local/cuda/lib64/ to /usr/lib/x86_64-linux-gnu/. Initially I could not build the wheel for pycuda because of the error lcurand not found, but I was able to pip install pycuda after that.
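The two workarounds above can be sketched as shell commands (paths as described in the steps; note that extending PATH and LD_LIBRARY_PATH is usually a cleaner fix than copying files into system directories):

```shell
# 1. Make trtexec visible on PATH by copying it from the TensorRT samples dir
sudo cp /usr/src/tensorrt/bin/trtexec /usr/local/bin/

# 2. Make libcurand visible to the linker so pycuda's wheel can build
sudo cp /usr/local/cuda/lib64/libcurand.so /usr/local/cuda/lib64/libcurand.so.10 \
        /usr/lib/x86_64-linux-gnu/

# Cleaner alternative: point the environment at the CUDA install instead
# export PATH=/usr/local/cuda/bin:/usr/src/tensorrt/bin:$PATH
# export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
```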

I am not sure why I am encountering this error, because I ran the same code with TensorRT natively on Windows 11 and was able to reproduce the expected results. But I get wrong results on WSL2 Ubuntu.
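When debugging a mismatch like this, it helps to compare the Keras and TensorRT outputs numerically rather than by eye. A minimal sketch; the two arrays below are placeholders, and in practice they would be the actual output tensors from `keras_model.predict(...)` and the TensorRT execution context:

```python
import numpy as np

def outputs_match(keras_out, trt_out, atol=1e-3, rtol=1e-3):
    """Return True if the two output tensors agree within tolerance.

    An FP32 TensorRT engine normally matches Keras to around 1e-3;
    a completely different top-1 class points at a broken setup,
    not rounding error.
    """
    keras_out = np.asarray(keras_out, dtype=np.float32).ravel()
    trt_out = np.asarray(trt_out, dtype=np.float32).ravel()
    if keras_out.shape != trt_out.shape:
        return False
    return bool(np.allclose(keras_out, trt_out, atol=atol, rtol=rtol))

# Placeholder tensors standing in for real model outputs
a = np.array([0.1, 0.7, 0.2])
b = np.array([0.1001, 0.6999, 0.2000])
print(outputs_match(a, b))        # small FP noise: True
print(outputs_match(a, b[::-1]))  # genuinely different result: False
```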

Hi,

Could you please share the link to the sample notebook you’re referring to, so we can try it on our end for better debugging?

Thank you.

I am not sure why I cannot paste the link. When I tried replying with the link, it said: ‘Sorry, you can’t post the word ‘quickstart IntroNotebooks’; it’s not allowed.’ But I can do this:

https://github.com/NVIDIA/TensorRT/blob/main → quickstart → IntroNotebooks → 3.%20Using%20Tensorflow%202%20through%20ONNX.ipynb

Hi,

We couldn’t reproduce the issue.

After doing inference with ONNX TensorRT:

Please make sure your installation is correct.
If you’re interested, you can use the TensorRT NGC container as well to avoid setup issues.
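If you go the container route, something along these lines should work. The tag 22.07-py3 is only an example; pick whichever release matches your driver from the NGC catalog:

```shell
# Pull a TensorRT container from NGC (tag is an example; check the catalog)
docker pull nvcr.io/nvidia/tensorrt:22.07-py3

# Run it with GPU access and the current directory mounted for the notebook
docker run --gpus all -it --rm \
    -v "$PWD":/workspace/host \
    nvcr.io/nvidia/tensorrt:22.07-py3
```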

Thank you.

I think people are experiencing similar issues with WSL2. Check out:

https://github.com/NVIDIA/TensorRT/issues/2069