model-engine-file path didn't work for YOLO

• Hardware Platform: GPU (RTX 2080)
• DeepStream Version: 5.0
• TensorRT Version: 7.0.0.11
• NVIDIA GPU Driver Version: 440.59

I've run into a problem with the YOLO model engine serialization path. I'm using the nvcr.io/nvidia/deepstream:5.0-dp-20.04-base Docker image and have the following folder structure:

/opt/app/
|______ 20200428-000000.mp4
|______ src/
|______ models/
|       |______ yolov3-tiny.cfg
|       |______ yolov3-tiny.weights
|______ configs/
        |______ config.txt

I want to run my application from the src/ folder and save the .engine file to the models/ folder. config.txt contains the following:

[property]
net-scale-factor=0.0039215697906911373
model-file=../models/yolov3-tiny.weights
custom-network-config=../models/yolov3-tiny.cfg
model-engine-file=../models/model_b1_gpu0_fp32.engine
batch-size=1
process-mode=1
# 0=RGB, 1=BGR
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=80
gie-unique-id=1
network-type=0
## 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None(No clustering)
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3Tiny
custom-lib-path=../libs/yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
nms-iou-threshold=1.
threshold=1.1

## Person class configuration
[class-attrs-0]
nms-iou-threshold=0.3
#threshold=0.7

I'm running the following command from the src folder:

gst-launch-1.0 uridecodebin uri=file:///opt/app/20200428-000000.mp4 ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! queue ! nvinfer config-file-path=/opt/app/configs/config.txt ! fakesink

and model_b1_gpu0_fp32.engine is saved to the src folder, not the models folder that I specified in the config. Could you advise what I'm doing wrong, or is this a bug?

Hi @ryabokon_dv
I can reproduce this issue.

According to the code below from /opt/nvidia/deepstream/deepstream-5.0/sources/libs/nvdsinfer/nvdsinfer_model_builder.cpp, both the engine file name and path are re-constructed.
I will check internally whether the path and name can be customized via the configuration file.

/* Build the model and return the generated engine. */
std::unique_ptr<TrtEngine>
TrtModelBuilder::buildModel(const NvDsInferContextInitParams& initParams,
    std::string& suggestedPathName)
{
    ...
    /* Construct the suggested path for engine file. */
    suggestedPathName =
        modelPath + "_b" + std::to_string(initParams.maxBatchSize) + "_" +
        devId + "_" + networkMode2Str(networkMode) + ".engine";
    return engine;
}
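
For illustration, here is a minimal shell sketch of the same concatenation, with values hard-coded to mirror the config above (batch-size=1, GPU 0, network-mode=0, i.e. FP32); the "model" prefix is an assumption that happens to match the file name the poster observed:

# Hypothetical stand-in values, not DeepStream API calls
modelPath="model"      # assumed default prefix; matches the observed file name
maxBatchSize=1         # batch-size=1 in config.txt
devId="gpu0"           # GPU 0
networkMode="fp32"     # network-mode=0
echo "${modelPath}_b${maxBatchSize}_${devId}_${networkMode}.engine"
# prints: model_b1_gpu0_fp32.engine

Since the constructed name carries no directory component of its own here, the engine lands relative to the current working directory, which would explain why it appears in src/.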

Thanks!

Hi @ryabokon_dv,
Sorry!
The issue you observed is expected behavior. As documented in the developer guide, model-engine-file is the pathname of the serialized model engine file; that is, it is only used by DeepStream to load an existing engine, not to decide where a newly generated engine is written.
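
One possible workaround (an untested sketch, not an official recommendation): let nvinfer generate the engine once, move it into models/, and point model-engine-file at the absolute path so subsequent runs deserialize it from there:

# After the first run has produced the engine in src/, relocate it
mv /opt/app/src/model_b1_gpu0_fp32.engine /opt/app/models/
# Update config.txt so model-engine-file points at the new absolute location
sed -i 's|^model-engine-file=.*|model-engine-file=/opt/app/models/model_b1_gpu0_fp32.engine|' /opt/app/configs/config.txt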

Thanks!