TensorRT C++ result is weird and changes every time I run the same inference

Description

Hi! I have a model that was serialized using Python, and now I want to deserialize that same model and run the same inference, but in C++. However, when I finished my code I found that the model gives me an extremely weird result, and every time I run the same inference code the result changes.
I have checked the pre-processing and post-processing code, which uses OpenCV for the image transformations, and no random variable is involved. So the only remaining suspect is the deserialized model.
So I wonder: can a model converted using Python fail to reproduce the same answer when used from C++, even if the device and environment stay the same?
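For reference, the C++ side deserializes the engine in the standard way. A minimal sketch of what I am doing (not my actual code; the file name and logger are placeholders):

#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    // Read the engine file that was serialized by the Python script (placeholder path).
    std::ifstream file("model.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();
    // ... copy input to GPU, context->enqueueV2(...), copy output back ...
    return 0;
}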

Environment

TensorRT Version: 7.2.2
GPU Type: TITAN V
Nvidia Driver Version: 450.51.05
CUDA Version: 11.0
CUDNN Version: 8.0.5
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Hi,
Can you try running your model with the trtexec command, and share the --verbose log in case the issue persists.
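For example (assuming your serialized engine file is named model.engine; adjust the path to your file):

trtexec --loadEngine=model.engine --verbose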

You can refer to the link below for the list of supported operators; if any operator is not supported, you need to create a custom plugin to support that operation.

Also, please share your model and script if not shared already, so that we can help you better.

Thanks!

Thanks for the reply. Actually, I already provided the trtexec --verbose log in the thread linked below, and got no further replies: TensorRT inference process.
Since this project is owned by the company, I am not able to share the model or the code, but I can give some details about the code.

The code was based on this GitHub project:
GitHub - zbw4034/TensorRT-googlenet-opencv: use opencv to read .jpg and accelerate using tensorrt.
My input is a (256, 330, 1) grayscale image, and the model does object detection on it. The output is a (256, 330, 13) tensor, where 13 is the number of class channels (if the image belongs to class 2, for example, then only channel 2 contains the mask image and the other channels are empty).
The post-processing stage extracts the correct channel and displays the mask on the original image.
I changed the original code to fit my input and also reformat the output from a [256x330x13] float array into a Mat, but the problem still exists as mentioned in the description (the result changes every time I run the code).
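The reformatting is roughly like this (a simplified sketch, not my actual code; it assumes the output buffer is in HWC order, otherwise the indexing would have to be CHW):

#include <opencv2/opencv.hpp>
#include <vector>

// Split the [256 x 330 x 13] float output copied back from the GPU into per-class Mats.
// Assumes HWC layout; for CHW the index would be c * H * W + y * W + x instead.
std::vector<cv::Mat> outputToChannels(const float* output) {
    const int H = 256, W = 330, C = 13;
    std::vector<cv::Mat> channels(C);
    for (int c = 0; c < C; ++c) {
        channels[c].create(H, W, CV_32FC1);
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                channels[c].at<float>(y, x) = output[(y * W + x) * C + c];
    }
    return channels;
}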
I can confirm that my TensorRT engine works fine in Python and gives the correct output there, but it works poorly in C++.

Hi @364083042,

It is difficult to answer without details of the code. We recommend you share a minimal reproducible inference script if possible. You may DM us.

For your reference, please refer to the C++ samples to follow the correct way to write an inference script.
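As a rough illustration, the inference call in the samples follows the pattern below (a simplified sketch with placeholder sizes and binding indices, not a drop-in implementation); note in particular the explicit stream synchronization before the host reads the output buffer.

#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <cstddef>

// Minimal enqueue/copy/synchronize pattern, roughly as in the TensorRT C++ samples.
// inputSize/outputSize are element counts; binding 0 = input, binding 1 = output (placeholders).
void infer(nvinfer1::IExecutionContext& context,
           const float* hostInput, size_t inputSize,
           float* hostOutput, size_t outputSize)
{
    void* buffers[2];
    cudaMalloc(&buffers[0], inputSize * sizeof(float));
    cudaMalloc(&buffers[1], outputSize * sizeof(float));

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    cudaMemcpyAsync(buffers[0], hostInput, inputSize * sizeof(float),
                    cudaMemcpyHostToDevice, stream);
    context.enqueueV2(buffers, stream, nullptr);   // run inference on the stream
    cudaMemcpyAsync(hostOutput, buffers[1], outputSize * sizeof(float),
                    cudaMemcpyDeviceToHost, stream);

    // Without this synchronization the host may read the output buffer before the
    // device-to-host copy has finished, which can give different results on every run.
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    cudaFree(buffers[0]);
    cudaFree(buffers[1]);
}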

Thank you.