Engine File default location

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
6.1
• TensorRT Version
8.2.5
• NVIDIA GPU Driver Version (valid for GPU only)
535.183.01

I’ve been using this repo GitHub - marcoslucianops/DeepStream-Yolo: NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
for a while, and when I give it an ONNX file without an engine, it creates an engine in ~ called something like model_0_gpu32.engine.
I know I could switch to trtexec as described in another topic here, but I want to stay with this method.
Is there any way to force the name and path of the engine that is created?

You can set the value of model-engine-file in the config_infer_primary_xxx.txt file.

The *.engine file will then be created at the specified path with the specified name.
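As a minimal illustration, the property goes in the [property] group of the nvinfer configuration file; the file paths below are placeholders, not values from this thread:

```ini
[property]
# Placeholder paths - replace with your own model and desired engine location
onnx-file=/path/to/model.onnx
model-engine-file=/path/to/model_b1_gpu0_fp32.engine
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=0
```

If the file named by model-engine-file exists and matches the configuration (batch size, precision, GPU), nvinfer loads it; otherwise it rebuilds the engine.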

I did set this parameter, but nothing happened…

1. If your engine file was converted by trtexec, and the values of batch-size and network-type in the configuration file are consistent with the trtexec parameters, the engine file specified by model-engine-file will be loaded by default.

2. If the engine file is generated by the DeepStream SDK, it will be named according to DeepStream’s rules.
Refer to the TrtModelBuilder::buildModel function in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/nvdsinfer_model_builder.cpp
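As a rough sketch of the naming rule applied in buildModel, the generated name is commonly the model file name plus a suffix encoding batch size, GPU ID, and precision (e.g. model.onnx becomes model.onnx_b1_gpu0_fp32.engine). This helper only predicts that pattern; it is an assumption based on the typical DeepStream output name, not the authoritative logic in nvdsinfer_model_builder.cpp:

```python
def engine_file_name(model_path: str, batch_size: int,
                     gpu_id: int, precision: str) -> str:
    """Predict the auto-generated engine file name (hypothetical helper).

    Mirrors the commonly observed DeepStream pattern:
    <model file>_b<batch-size>_gpu<gpu-id>_<precision>.engine
    Check TrtModelBuilder::buildModel for the authoritative rule.
    """
    return f"{model_path}_b{batch_size}_gpu{gpu_id}_{precision}.engine"


# Example: an ONNX model built with batch-size 1 on GPU 0 in FP32
print(engine_file_name("model.onnx", 1, 0, "fp32"))
# -> model.onnx_b1_gpu0_fp32.engine
```

Knowing the predicted name lets you point model-engine-file at the file DeepStream will generate, so subsequent runs skip the rebuild.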

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.