• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 7.0
• TensorRT Version: 8.6.1
• NVIDIA GPU Driver Version (valid for GPU only): 535.171.04
• Issue Type (questions, new requirements, bugs): questions
Hello!
I am having trouble generating the .engine files for my models.
I have 3 models. After moving to a new machine, nvinfer regenerated an engine file for each of them, but only one of them was written to the correct location (next to the model file). I have seen this post, so I set the permissions on the models folder, but 2 of the engine files are still saved to ~/app/model_b1_gpu0_fp32.engine in the Docker container.
nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2141> [UID = 1]: serialize cuda engine to file: /home/ds_tracker/app/model_b1_gpu0_fp32.engine successfully
Can you provide some insight on what could be happening?
As for this: the engine file is generated automatically when I run the Docker container, which contains the pipeline with the appropriate nvinfer elements. The container that builds this before use is based on the nvcr.io/nvidia/deepstream:7.0-gc-triton-devel image.
Did you define engine-create-func-name in your config file?
You can refer to the following source code in /opt/nvidia/deepstream/deepstream-7.0/sources/libs/nvdsinfer/nvdsinfer_model_builder.cpp.
If you set a value for engine-create-func-name, the engine file will be named model_xxxx_xx.engine.
You can modify the code here according to your needs.
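As an alternative worth trying first (this is our suggestion, not something covered in the thread above), you can pin the engine location explicitly with the model-engine-file key in the nvinfer config, so nvinfer looks for the engine at a known path instead of relying on the auto-derived one. A minimal sketch; all paths are placeholders:

```
[property]
# Placeholder paths; adjust to your deployment.
onnx-file=/models/detector/detector.onnx
# Explicit engine path next to the model file.
model-engine-file=/models/detector/detector_b1_gpu0_fp32.engine
batch-size=1
# network-mode=0 selects FP32, matching the _fp32 suffix above.
network-mode=0
```

Whether nvinfer also serializes a freshly built engine to this path can depend on directory permissions inside the container, so verify the behavior on your setup.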
Thank you!
I have modified the mentioned CPP file so that the engine file paths are derived from the ONNX file path. Afterwards I recompiled the nvdsinfer lib, and it works now.