Wrong location for engine file generation

• Hardware Platform (Jetson / GPU) : GPU
• DeepStream Version : 7.0
• TensorRT Version : 8.6.1
• NVIDIA GPU Driver Version (valid for GPU only) : 535.171.04
• Issue Type( questions, new requirements, bugs) : questions

Hello!
I am having an issue with where the .engine files for my models are generated.

I have 3 models. After moving to a new machine, each of them triggered generation of a new engine file (by nvinfer), but only one of them writes it to the correct location (the directory containing the model file). I have seen this post, so I set the permissions on the models folder, but 2 of the engine files are still saved to ~/app/model_b1_gpu0_fp32.engine in the Docker container.
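For reference, this is roughly how I double-check whether nvinfer can write next to a model before running the pipeline (a sketch; `check_model_dir` is my own helper name and the path is from my setup, so substitute your own):

```shell
# Minimal sketch: nvinfer can only serialize the engine next to the model
# if that directory is writable by the process inside the container;
# otherwise the engine ends up somewhere else.
check_model_dir() {
    if [ -w "$1" ]; then
        echo "writable"
    else
        echo "not writable"
    fi
}

# Hypothetical path from my setup; substitute your own model folder:
check_model_dir /home/ds_tracker/models/object_detection
```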

nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2141> [UID = 1]: serialize cuda engine to file: /home/ds_tracker/app/model_b1_gpu0_fp32.engine successfully

Can you provide some insight on what could be happening?

In your configuration file, how do you set the value of model-engine-file?

How did you generate the engine file? Can you share the command line? Is it in Docker?

Hello @junshengy,
Thank you for your answer.

For the one model whose engine is saved to the correct location, it is set like this:

model-engine-file=/home/ds_tracker/models/ocr_recognition/lp_ocr_104_32_nchw_b1_gpu0_fp32.engine

For this model, the output was written to:

/home/ds_tracker/models/ocr_recognition/lp_ocr_104_32_nchw.onnx_b1_gpu0_fp32.engine
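Note that the written name is not identical to the configured model-engine-file: it contains the .onnx suffix, because nvinfer appears to derive the engine name from the full model file name, extension included. A rough sketch of that naming convention (an assumption inferred from the log above, not the actual DeepStream code):

```python
import os

def derived_engine_name(model_path, batch=1, gpu=0, precision="fp32"):
    """Sketch of the name nvinfer appears to use when serializing an engine.

    Assumption from the log above: the suffix _b<batch>_gpu<gpu>_<precision>.engine
    is appended to the model file's base name, extension included.
    """
    return f"{os.path.basename(model_path)}_b{batch}_gpu{gpu}_{precision}.engine"

print(derived_engine_name("/home/ds_tracker/models/ocr_recognition/lp_ocr_104_32_nchw.onnx"))
# lp_ocr_104_32_nchw.onnx_b1_gpu0_fp32.engine
```

If the configured model-engine-file does not match this derived name, nvinfer will not find the cached engine on the next start and will rebuild it.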

For the models whose engines are not saved next to the model file, it is set like this:

model-engine-file=../models/object_detection/yolov8s_b1_gpu0_fp32.engine

or this:

model-engine-file=/home/ds_tracker/models/lp_detection/yolov7_lp_detection_b1_gpu0_fp32.engine

As for how the engine is generated: it is built automatically by nvinfer when I run the Docker container that contains the pipeline with the appropriate nvinfer elements. The Docker container that builds this before use is based on the nvcr.io/nvidia/deepstream:7.0-gc-triton-devel image.
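For completeness, the relevant part of one of my configs has this shape (a sketch with hypothetical values; the onnx-file path is my guess at the source model for that engine name, and only the general layout is taken from my actual files):

```ini
[property]
# Source model; nvinfer rebuilds the engine when the file named by
# model-engine-file is missing or does not match the current settings.
onnx-file=/home/ds_tracker/models/object_detection/yolov8s.onnx
# Prebuilt engine looked up on startup; using an absolute path that matches
# the name nvinfer itself serializes avoids a rebuild on every run.
model-engine-file=/home/ds_tracker/models/object_detection/yolov8s.onnx_b1_gpu0_fp32.engine
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=0
```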