Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 7.0
• JetPack Version (valid for Jetson only): N/A
• TensorRT Version: 8.6
• NVIDIA GPU Driver Version (valid for GPU only): 12.2
• Issue Type (questions, new requirements, bugs): bugs
• How to reproduce the issue? (This is for bugs. Include the sample app being used, the configuration file contents, the command line used, and any other details needed to reproduce.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
Docker image: nvcr.io/nvidia/deepstream:7.0-gc-triton-devel
GPU: 4070 (8 GB)
I am using YOLOv8 models as back-to-back detectors in Python: one primary infer and multiple secondary infers, all using the nvinfer plugin. All secondary nvinfer instances need to share the same batch size. A minimal sketch of this layout follows.
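This is roughly how the pipeline is wired (a minimal sketch assuming the standard GStreamer Python bindings; the element names and config file names below are placeholders, not my actual app):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("back-to-back")

# One primary detector...
pgie = Gst.ElementFactory.make("nvinfer", "primary-infer")
pgie.set_property("config-file-path", "pgie_config.txt")
pipeline.add(pgie)

# ...followed by several secondary detectors, each with its own config file.
sgies = []
for i, cfg in enumerate(["sgie_parts_config.txt", "sgie_other_config.txt"]):
    sgie = Gst.ElementFactory.make("nvinfer", f"secondary-infer-{i}")
    sgie.set_property("config-file-path", cfg)
    pipeline.add(sgie)
    sgies.append(sgie)

# Source, nvstreammux, and sink are omitted; the infer elements are linked in series.
pgie.link(sgies[0])
for a, b in zip(sgies, sgies[1:]):
    a.link(b)
```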
My problem: when the engine file is created for one secondary infer, it overwrites the engine file created by another secondary infer, because DeepStream's default engine naming (model_b6_gpu0_fp32.engine) is identical for every secondary with the same batch size, GPU, and precision, and all engine files are created in the cwd. I want each secondary's engine saved under its own path.

I tried setting the path via "model-engine-file", but the generated engine is still saved in the cwd. I also saw other topics saying the engine file would be saved in the same place as the "onnx-file", but it is still written to the cwd. I need to save each engine file at a dedicated path. (One pre-build workaround I am considering is sketched below.)
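The workaround I am considering is to pre-build each engine with trtexec to an explicit, non-colliding path, and then point model-engine-file at that exact file so nvinfer only deserializes it and never has to write one. This is an untested sketch: the input tensor name "images" and the 640x640 resolution are assumptions based on a typical YOLOv8 export (the shape flags are only needed when the ONNX has a dynamic batch dimension), and I have not confirmed that a trtexec-built engine matches what the custom engine-create function would produce.

```python
import subprocess

# onnx -> dedicated engine path; the commented entry is a hypothetical
# placeholder for another secondary model.
models = {
    "weights/parts/bodyPartDetectionModel.onnx": "weights/parts/parts_b6_fp32.engine",
    # "weights/other/otherModel.onnx": "weights/other/other_b6_fp32.engine",
}

for onnx, engine in models.items():
    subprocess.run(
        [
            "trtexec",
            f"--onnx={onnx}",
            f"--saveEngine={engine}",
            # Only needed for a dynamic-batch ONNX; verify the input
            # tensor name ("images") and resolution against your export.
            "--minShapes=images:1x3x640x640",
            "--optShapes=images:6x3x640x640",
            "--maxShapes=images:6x3x640x640",
        ],
        check=True,
    )
```

Each secondary config would then point model-engine-file at its own pre-built engine (e.g. weights/parts/parts_b6_fp32.engine).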
Attaching my infer config for reference:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=weights/parts/bodyPartDetectionModel.onnx
#model engine file will be created on its own.
model-engine-file=weights/parts/model_b6_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=parts_labels.txt
batch-size=6
network-mode=0
num-detected-classes=6
interval=0
gie-unique-id=4
process-mode=2
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=/opt/nvidia/deepstream/deepstream-7.0/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.5
topk=300
Things I have tried:
• Passing absolute paths; the engine file is still saved in the cwd.
• Checking permissions: chmod 755 /opt/nvidia/deepstream/deepstream-7.0/app/weights/part
• Pausing the other nvinfers and manually copying each engine file into /opt/nvidia/deepstream/deepstream-7.0/app/weights/part. The engine is then read successfully, but I would like to avoid this manual step if possible (a sketch automating it follows).
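If there is no way to control where the engine is written, I could automate the manual copy from the last step along these lines (a hedged sketch: it assumes the builds are run one secondary at a time, since the identical default names would otherwise overwrite each other in the cwd, and the paths are examples):

```python
import shutil
from pathlib import Path

# (engine name DeepStream generates in the cwd, dedicated destination path)
builds = [
    ("model_b6_gpu0_fp32.engine",
     "/opt/nvidia/deepstream/deepstream-7.0/app/weights/part/model_b6_gpu0_fp32.engine"),
]

for generated, destination in builds:
    src, dst = Path(generated), Path(destination)
    if src.exists() and not dst.exists():
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(src), str(dst))  # move the freshly built engine to its dedicated path
```

But I would much rather have nvinfer write the generated engine directly to the path given in model-engine-file. Is there a supported way to do that?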