1. If your engine file was converted by trtexec, and the batch-size and network-type values in the configuration file are consistent with the trtexec parameters, the engine file specified by model-engine-file will be loaded by default.
2. If the engine file is generated by the DeepStream SDK, it will be named according to DeepStream's rules.
Refer to the function TrtModelBuilder::buildModel in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/nvdsinfer_model_builder.cpp.
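For case 2, DeepStream's generated engines typically follow the pattern `<model-file>_b<batch-size>_gpu<gpu-id>_<precision>.engine` (the authoritative logic is in buildModel). A minimal Python sketch of that convention, where the helper name and exact format string are illustrative assumptions rather than a DeepStream API:

```python
# Sketch of DeepStream's default engine-file naming convention.
# Assumption: the pattern "<model-file>_b<batch>_gpu<gpu-id>_<precision>.engine",
# as commonly produced by TrtModelBuilder::buildModel.
def engine_file_name(model_file: str, batch_size: int,
                     gpu_id: int, precision: str) -> str:
    return f"{model_file}_b{batch_size}_gpu{gpu_id}_{precision}.engine"

# Example: a Caffe model built with batch-size 1 on GPU 0 in INT8 mode
print(engine_file_name("resnet10.caffemodel", 1, 0, "int8"))
# resnet10.caffemodel_b1_gpu0_int8.engine
```

If the file named this way already exists next to the model, nvinfer loads it instead of rebuilding the engine.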
There has been no update from you for some time, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.