Failed to build engine file

Please provide complete information as applicable to your setup.

• Hardware Platform: Orin AGX
• DeepStream Version: 6.3
• JetPack Version: 5.1.2
• TensorRT Version: 8.5.2
• Issue Type: question
• How to reproduce the issue?

We trained a YOLOv4 model in the NVIDIA TAO container v3.21, which produced the “.etlt” and “.bin” files, and then created the “.engine” file in the NVIDIA TAO container v4.0.0. To test on DeepStream 6.3, we pointed the DeepStream sample config file at these .etlt, .bin, and .engine files. Even though I have provided the engine file, DeepStream still tries to build the engine itself and I receive this error:

ERROR: Failed to build network, error in model parsing.
ERROR: [TRT]: 3: [builder.cpp::~Builder::307] Error Code 3: API Usage Error (Parameter check failed at: optimizer/api/builder.cpp::~Builder::307, condition: mObjectCounter.use_count() == 1. Destroying a builder object before destroying objects it created leads to undefined behavior.
)
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:05.983046893 13088 0xaaab07325c70 ERROR                nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2022> [UID = 1]: build engine file failed
Segmentation fault
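For context, a minimal sketch of how these files are typically referenced in the [property] section of a DeepStream nvinfer config (placeholder paths and values, not the actual file used here):

```
[property]
# Placeholders; adjust to your own files and model.
tlt-encoded-model=yolov4_resnet18.etlt
tlt-model-key=<your_key>
int8-calib-file=cal.bin
model-engine-file=yolov4_resnet18.engine
# network-mode: 0=fp32, 1=int8, 2=fp16
network-mode=2
num-detected-classes=4
```

If model-engine-file does not exist or does not match the current GPU and TensorRT version, nvinfer falls back to rebuilding the engine from the .etlt, which is the path that fails in the log above.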

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

The engine must be generated with exactly the same compute stack and GPU. Can you generate the engine with “tao-converter” in the same environment as DeepStream? TAO Converter | NVIDIA NGC
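As an illustration, a tao-converter invocation for a TAO YOLOv4 .etlt model typically looks like the sketch below. The key, input dimensions, and file names are placeholders for your own values; run it on the Orin AGX itself so the resulting engine matches its GPU and TensorRT 8.5.2:

```shell
# Sketch only; placeholder key, dims, and file names.
#   -k  encode key used when the model was exported from TAO
#   -d  input dimensions (C,H,W) of the trained model
#   -o  output node name (BatchedNMS for TAO YOLOv4 exports)
#   -t  precision (fp16 here; for int8 also pass -c with the .bin calibration file)
#   -e  path of the engine file to write
tao-converter -k <your_key> -d 3,544,960 -o BatchedNMS -t fp16 \
  -e yolov4_resnet18.engine yolov4_resnet18.etlt
```

Then point model-engine-file in the DeepStream config at the engine written by this command.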

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.