Wrong location for engine file generation

• Hardware Platform (Jetson / GPU) : GPU
• DeepStream Version : 7.0
• TensorRT Version : 8.6.1
• NVIDIA GPU Driver Version (valid for GPU only) : 535.171.04
• Issue Type( questions, new requirements, bugs) : questions

Hello!
I have some issues trying to generate the .engine files for models.

I have 3 models, and all of them generated a new engine file (via nvinfer) after moving to a new machine, but only one of them does so in the correct location (next to the model file). I have seen this post, so I have set the permissions on the models' folder, but two engine files are still saved to ~/app/model_b1_gpu0_fp32.engine in the Docker container.

nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2141> [UID = 1]: serialize cuda engine to file: /home/ds_tracker/app/model_b1_gpu0_fp32.engine successfully

Can you provide some insight on what could be happening?

In your configuration file, how do you set the value of model-engine-file?

How did you generate the engine file? Can you share the command line? Is it in Docker?

Hello @junshengy,
Thank you for your answer.

For the one model that is saved correctly, it is set like this:

model-engine-file=/home/ds_tracker/models/ocr_recognition/lp_ocr_104_32_nchw_b1_gpu0_fp32.engine

For this model, the output was written to:

/home/ds_tracker/models/ocr_recognition/lp_ocr_104_32_nchw.onnx_b1_gpu0_fp32.engine

For the models that are not saved in the right place, it is set like this:

model-engine-file=../models/object_detection/yolov8s_b1_gpu0_fp32.engine

or this:

model-engine-file=/home/ds_tracker/models/lp_detection/yolov7_lp_detection_b1_gpu0_fp32.engine

As for that, the engine file is generated automatically when I run the Docker container, which contains the pipeline with the appropriate nvinfer elements. The container that builds this before use is based on the nvcr.io/nvidia/deepstream:7.0-gc-triton-devel image.

Hello @junshengy,
Could you give me an update on this thread?
Thank you!

Sorry for the long delay; it was due to a holiday.

Do these directories exist? nvinfer does not automatically create the directories.

Usually you should name your engine file as follows:

yolov8s.onnx_b1_gpu0_fp32.engine
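To make that convention concrete, a minimal nvinfer config sketch (the paths, model name, batch size, and precision here are illustrative, not taken from your setup) pairing onnx-file with a matching model-engine-file would be:

[property]
onnx-file=/home/ds_tracker/models/object_detection/yolov8s.onnx
model-engine-file=/home/ds_tracker/models/object_detection/yolov8s.onnx_b1_gpu0_fp32.engine
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=0

The suffix encodes batch size, GPU id, and precision, so it must match batch-size and network-mode or nvinfer will rebuild the engine anyway.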

No problem, thanks for getting back to me!

Yes.

I have changed them to these:

  • yolov7_lp_detection.onnx_b1_gpu0_fp32.engine
  • yolov8m.onnx_b1_gpu0_fp32.engine

But I get the same result :/

Did you define engine-create-func-name in your config file?

You can refer to the following source code in /opt/nvidia/deepstream/deepstream-7.0/sources/libs/nvdsinfer/nvdsinfer_model_builder.cpp.
If you set a value for engine-create-func-name, the engine file will be named model_xxxx_xx.engine.

You can modify the code here according to your needs.

if (cudaEngineGetFcn || cudaEngineGetDeprecatedFcn ||
        !string_empty(initParams.tltEncodedModelFilePath))
{
    if (cudaEngineGetFcn || cudaEngineGetDeprecatedFcn)
    {
        /* NvDsInferCudaEngineGet interface provided: the engine path is
         * built from the process's current working directory, not from
         * the model file's directory. */
        char *cwd = getcwd(NULL, 0);
        modelPath = std::string(cwd) + "/model";
        free(cwd);
    }
    else
    /* ... (the else branch, handling TLT-encoded models, continues in the source) */
Yes, both models are Yolo-based, so in their config this is set:

engine-create-func-name=NvDsInferYoloCudaEngineGet

Does this mean that I need to modify nvdsinfer_model_builder.cpp to get the engine files in the right place?

Yes. There is a workaround: put the yolov7/v8 .onnx files in different paths; then both of them can be saved as model_xxxx_xx.engine.

Well, they are already located in different folders:

/home/ds_tracker/models/lp_detection
/home/ds_tracker/models/object_detection

That is why I thought they should be generated into different folders, but they are all placed in the /home/ds_tracker/app folder.

This is the behavior of getcwd. This function returns the current working directory of the process (wherever the application was launched from), so you may need to modify the code.

Thank you!
I have modified the mentioned CPP file so that the engine file paths are derived from the ONNX file path. After recompiling the nvdsinfer library, it works now.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.