Wrong Output from TensorRT Model Converted from ONNX

Hi ,

I have developed a convolutional autoencoder model in PyTorch and exported it to ONNX. Everything appears to work and I am able to run inference, but the output of the TensorRT model is very different from the output obtained from the model in PyTorch.

Both inference scripts are written in Python.

To avoid any image-loading or type-conversion differences, I feed the same input to both models, loading the data from the same file.

def do_inference(context, h_input, d_input, h_output, d_output, stream):
    # Transfer input data to the GPU.
    cuda.memcpy_htod_async(d_input, h_input, stream)
    # Run inference.
    context.execute_async(bindings=[int(d_input), int(d_output)], stream_handle=stream.handle)
    # Transfer predictions back from the GPU.
    cuda.memcpy_dtoh_async(h_output, d_output, stream)
    # Synchronize the stream so the device-to-host copy has finished
    # before h_output is read on the host.
    stream.synchronize()
# TensorRT inference
array = np.loadtxt("/home/velab/Desktop/model_input.txt")
image_arr = array.ravel()
np.copyto(h_input, image_arr)  # h_input is the page-locked host input buffer
do_inference(context, h_input, d_input, h_output, d_output, stream)
output = torch.from_numpy(h_output).float()
imagePIL = transforms.ToPILImage()(output.view(-1, 64, 64))
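One thing worth ruling out (an assumption on my part, not something confirmed by the post): `np.loadtxt` returns `float64` by default, while TensorRT engines built from ONNX usually expect `float32` input. If the page-locked buffer or the bindings were set up with mismatched dtypes, the engine can read misinterpreted data and produce garbage. A minimal sketch of the check and the explicit cast:

```python
import io
import numpy as np

# np.loadtxt parses text into float64 by default.
array = np.loadtxt(io.StringIO("0.1 0.2 0.3"))
print(array.dtype)  # float64

# Cast explicitly to the dtype the engine binding expects (typically
# float32) before copying into the page-locked host buffer.
image_arr = array.astype(np.float32).ravel()
print(image_arr.dtype)  # float32
```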

# PyTorch script

array = np.loadtxt("/home/velab/Desktop/model_input.txt")
tensor_image_in = torch.from_numpy(array).float().cuda()
image_inputPIL = transforms.ToPILImage()(tensor_image_in.cpu().view(-1, 64, 64))

The output image from the PyTorch script is correct, but the output image from TensorRT is not.
Could someone give me any feedback on why this is not working?
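Rather than comparing the two outputs visually as PIL images, it may help to compare the raw arrays numerically; small per-pixel differences are expected between frameworks, but large ones point to a real conversion or I/O problem. A minimal sketch (the helper `compare_outputs` and its tolerances are my own, not from the scripts above):

```python
import numpy as np

def compare_outputs(a, b, rtol=1e-3, atol=1e-4):
    """Print a short numerical diff report for two model outputs."""
    a = np.asarray(a, dtype=np.float32).ravel()
    b = np.asarray(b, dtype=np.float32).ravel()
    diff = np.abs(a - b)
    print("max abs diff :", diff.max())
    print("mean abs diff:", diff.mean())
    return np.allclose(a, b, rtol=rtol, atol=atol)

# Example with dummy data standing in for the PyTorch and TensorRT outputs.
x = np.linspace(0.0, 1.0, 8)
print(compare_outputs(x, x + 1e-6))  # True: well within tolerance
```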


Can you provide the following information so we can better help?
Provide details on the platforms you are using:
o Linux distro and version
o GPU type
o Nvidia driver version
o CUDA version
o CUDNN version
o Python version [if using python]
o Tensorflow version
o TensorRT version
o If Jetson, OS, hw versions

Also, if possible please share the script and model file to reproduce the issue.