Description
I created a simple model with a single convolution layer followed by a ReLU activation, and built a TRT engine from it following TensorRT/samples/python/network_api_pytorch_mnist. The model performs well when I run it with the model.predict method in TensorFlow, but after converting the model into a TRT engine, the performance is poor.
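For concreteness, here is a minimal sketch of the TensorFlow side. The input shape and the 3x3 mean-filter kernel are assumptions for illustration (the repo name suggests a mean filter); the exact model is in the repository linked below.

```python
import numpy as np
import tensorflow as tf

# Assumed 3x3 single-channel mean-filter kernel (Keras layout: H, W, in, out).
kernel = np.full((3, 3, 1, 1), 1.0 / 9.0, dtype=np.float32)

# Single Conv2D layer with a ReLU activation, no bias.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(filters=1, kernel_size=3, padding="same",
                           activation="relu", use_bias=False,
                           input_shape=(28, 28, 1)),
])
model.layers[0].set_weights([kernel])

x = np.random.rand(1, 28, 28, 1).astype(np.float32)  # NHWC batch of one
ref = model.predict(x)  # TensorFlow reference output
```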
I have attached my repository along with the NVIDIA GitHub sample; can anyone please take a look and help me out?
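And a minimal sketch of the TensorRT side, following the network_api_pytorch_mnist style (the exact build code is in the repository; the shapes and kernel here are the same assumptions as above). Note the NHWC-to-NCHW transposes needed when moving between TensorFlow and TensorRT:

```python
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

# TensorRT expects conv weights as (out, in, H, W); Keras stores (H, W, in, out).
kernel = np.full((3, 3, 1, 1), 1.0 / 9.0, dtype=np.float32)
trt_kernel = np.ascontiguousarray(kernel.transpose(3, 2, 0, 1))

inp = network.add_input("input", trt.float32, (1, 1, 28, 28))  # NCHW
conv = network.add_convolution(input=inp, num_output_maps=1,
                               kernel_shape=(3, 3), kernel=trt_kernel,
                               bias=trt.Weights())
conv.padding = (1, 1)  # matches Keras padding="same" for a 3x3 kernel
relu = network.add_activation(conv.get_output(0), trt.ActivationType.RELU)
network.mark_output(relu.get_output(0))

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28
engine = builder.build_engine(network, config)
context = engine.create_execution_context()

# Use the same input array as the TensorFlow sketch when actually comparing,
# transposed NHWC -> NCHW.
x = np.random.rand(1, 28, 28, 1).astype(np.float32)
x_chw = np.ascontiguousarray(x.transpose(0, 3, 1, 2))
out = np.empty((1, 1, 28, 28), dtype=np.float32)

d_in = cuda.mem_alloc(x_chw.nbytes)
d_out = cuda.mem_alloc(out.nbytes)
cuda.memcpy_htod(d_in, x_chw)
context.execute_v2([int(d_in), int(d_out)])
cuda.memcpy_dtoh(out, d_out)

# Compare against the TensorFlow output (ref) transposed to NCHW:
# np.testing.assert_allclose(out, ref.transpose(0, 3, 1, 2), rtol=1e-5)
```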
Environment
TensorRT Version : 8.0
GPU Type : GTX 1050 Ti
Nvidia Driver Version : 470.57.02
CUDA Version : 11.4
CUDNN Version : 8.2
Operating System + Version : Ubuntu 20.04
Python Version (if applicable) : 3.8
TensorFlow Version (if applicable) : 2.5
PyTorch Version (if applicable) :
Baremetal or Container (if container which image + tag) :
Relevant Files
https://github.com/Ragu2399/TRT-for-Mean-Filter
Steps To Reproduce
Please clone the repository linked under Relevant Files; it contains the exact steps/commands to build and run the repro.