Error message when generating TensorRT engine file from a TLT model

• Hardware Platform: GPU
• DeepStream Version: 6.2
• TensorRT Version:
• NVIDIA GPU Driver Version: 525.85.12
• Issue Type: Bug

When using nvinfer to generate a TensorRT engine file from a TLT model the following error is displayed:

ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error
(Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1.
Destroying a builder object before destroying objects it created leads to undefined behavior.)

We have tested this with the latest versions of several models from NGC and get this result for all of them. The engine files are still generated and work well, so this error is probably not critical, although it would be better if nvinfer could generate the engine file without any errors.

Which sample and model are you testing? Is this error fatal? Was the engine generated? Could you share the whole log?

We have tested with the following models:

All the models above give the same result, and I suspect that this is the case for all TLT models. The error is not fatal, and the generated TensorRT engines work well as far as we can tell. Still, there could be problems with the generated engines that we are not aware of, and it would be preferable if nvinfer could generate the engine file without any errors.

Thanks for your report. We are checking this and will get back to you later.


Sorry for the late reply; this issue has been fixed in DeepStream 6.3.
