Getting a different TensorRT engine each time I compile the same ONNX model on the same PC

Description

I’m loading an ONNX model and converting it to a TensorRT engine using the model builder:

auto builder = UniquePtr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(g_logger));
const auto explicitBatch = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
auto network = UniquePtr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(explicitBatch));
auto parser = UniquePtr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, g_logger));
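
The parse and build calls themselves aren't shown above; a minimal sketch of that part (assuming standard TensorRT 7 APIs, with the ONNX path, workspace size, and the type of m_cudaEngine as placeholders) would be:

// Parse the ONNX file into the network definition
if (!parser->parseFromFile(onnxPath.c_str(), static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
{
    // On failure, parser->getError(i) for i in [0, parser->getNbErrors()) describes what went wrong
}

// Configure the builder and build the engine
auto config = UniquePtr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());
config->setMaxWorkspaceSize(1ULL << 30); // 1 GiB scratch space (placeholder value)
m_cudaEngine = UniquePtr<nvinfer1::ICudaEngine>(builder->buildEngineWithConfig(*network, *config));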

and then I save the serialized model:
auto serializedModel = UniquePtr<nvinfer1::IHostMemory>(m_cudaEngine->serialize());
std::ofstream modelFile(outputPath.c_str(), std::ofstream::binary|std::ios::trunc);
modelFile.write(static_cast<const char*>(serializedModel->data()), serializedModel->size());
modelFile.close();

The saved serialized model is different every time I convert it.
I wouldn't mind that, but when I use it I get different (though still reasonable) outputs for the same input image.
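
For context, the saved engine is loaded back for inference roughly as follows (a sketch, assuming TensorRT 7 APIs, the same g_logger, and that outputPath is the file written above):

// Read the serialized engine back from disk (needs <fstream>, <vector>, <iterator>)
std::ifstream engineFile(outputPath.c_str(), std::ios::binary);
std::vector<char> engineData((std::istreambuf_iterator<char>(engineFile)), std::istreambuf_iterator<char>());

// Deserialize it with a runtime created from the same logger, then create an execution context
auto runtime = UniquePtr<nvinfer1::IRuntime>(nvinfer1::createInferRuntime(g_logger));
auto engine = UniquePtr<nvinfer1::ICudaEngine>(runtime->deserializeCudaEngine(engineData.data(), engineData.size(), nullptr));
auto context = UniquePtr<nvinfer1::IExecutionContext>(engine->createExecutionContext());
// context->enqueueV2(...) then runs inference on the bound input/output buffers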

Environment

TensorRT Version: 7.2.1.6
GPU Type: GTX 1080 Ti
Nvidia Driver Version: 511.65
CUDA Version: 11.6
CUDNN Version: 8.0.4
Operating System + Version: Windows 10, Build 19044
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):


Hi,

It looks like you're using an old version of TensorRT. We recommend trying the latest TensorRT version, 8.4 EA.

If you still face this issue, please share the ONNX model and your conversion script (if not shared already) so that we can assist you better.
In the meantime, you can also try a few things:

Try running your model with the trtexec command.

If you are still facing the issue, please share the trtexec --verbose log for further debugging.
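For example, a command along these lines builds the engine and captures the verbose log (model.onnx and model.trt are placeholder paths):

trtexec --onnx=model.onnx --saveEngine=model.trt --verbose > trtexec_verbose.log 2>&1
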
Thanks!
