Gst-nvinfer using custom 'engine-create-func-name' doesn't save engine file according to 'model-engine-file'

• x86 / RTX 2080
• DeepStream Version: 6.1
• Using docker container: nvcr.io/nvidia/deepstream:6.1-devel

For the Gst-nvinfer plugin, I’ve provided my own custom engine creation function, configured via engine-create-func-name in the config file, and this part works as expected: the engine is created and inference runs correctly.
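
For context, the relevant part of my nvinfer YAML config looks roughly like this (the paths, values, and function name below are placeholders for my actual setup, not the exact files):

    property:
      gpu-id: 0
      batch-size: 1
      network-mode: 2    # 0=FP32, 1=INT8, 2=FP16
      model-engine-file: /models/my_model_b1_gpu0_fp16.engine
      custom-lib-path: /opt/my_app/libmy_nvinfer_custom.so
      engine-create-func-name: MyEngineCreateFunc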

However, I’ve noticed that when my code returns the engine to the Gst-nvinfer plugin, the plugin saves the engine to a generic path, named like model_b{batch_sz}_gpu{gpu_idx}_{network_mode}.engine and placed in the current working directory, instead of saving it to the path configured via model-engine-file in the config file (I’m using the YAML version, if that matters).

This results in the engine being rebuilt on each run of the pipeline, which is not ideal.
I do this for two different models, one a classifier and one a detector; the behavior doesn’t seem specific to the model, so any model should do for reproducing it.

As a workaround, I was able to save the serialized engine from my own code, before creating and returning the engine, like so:

    NvInferUniquePtr<nvinfer1::IHostMemory> plan{
        builder->buildSerializedNetwork(*network, *config)};
    if (!plan) {
        std::cerr << "Failed to build network plan" << std::endl;
    } else {
        // Note: due to a (possible?) bug in DeepStream, the engine returned
        // from this function is saved under a generic name, so it is not found
        // at the configured model-engine-file path on the next run.
        std::ofstream p(initParams->modelEngineFilePath, std::ios::binary);
        if (!p) {
            std::cerr << "Could not open engine output file "
                      << initParams->modelEngineFilePath << std::endl;
        } else {
            p.write(reinterpret_cast<const char*>(plan->data()), plan->size());
        }
    }

This tells me that the model-engine-file value is available since I can use it from my code.
When I save the file myself, then on the next run of the pipeline, the engine is found and loaded (and my custom engine create function is not called).

I’m wondering: did I miss some configuration, or is this a bug? Or maybe it’s the intended behavior? But if it is intended, why is the engine file saved at this alternate location?

This should be reproducible with the sources/objectDetector_Yolo sample; I did notice in that README that

The first-time a “model_b1_int8.engine” would be generated as the engine-file

This seems like a problem, because if I have more than one model with batch size 1 using FP16 (which I do), the engine files will all end up with the same path name, model_b1_gpu0_fp16.engine.

Please refer to the TrtModelBuilder::buildModel() function in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/nvdsinfer_model_builder.cpp.

The model engine path is created here:

......
    if (cudaEngineGetFcn || cudaEngineGetDeprecatedFcn ||
            !string_empty(initParams.tltEncodedModelFilePath))
    {
        if (cudaEngineGetFcn || cudaEngineGetDeprecatedFcn)
        {
            /* NvDsInferCudaEngineGet interface provided. */
            char *cwd = getcwd(NULL, 0);
            modelPath = std::string(cwd) + "/model";
            free(cwd);
        }
        else
        {
            /* TLT model. Use NvDsInferCudaEngineGetFromTltModel function
             * provided by nvdsinferutils. */
            cudaEngineGetFcn = NvDsInferCudaEngineGetFromTltModel;
            modelPath = safeStr(initParams.tltEncodedModelFilePath);
        }

        engine = getCudaEngineFromCustomLib (cudaEngineGetDeprecatedFcn,
                cudaEngineGetFcn, initParams, networkMode);
        if (engine == nullptr)
        {
            dsInferError("Failed to get cuda engine from custom library API");
            return nullptr;
        }
    }

@Fiona.Chen Thanks; that explains why it’s suggesting that model path.

Doesn’t it seem like it would be better to use initParams.modelEngineFilePath (if specified), especially since there’s a possibility of duplication in the suggestedPathName value?
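
Roughly the kind of change I have in mind for that branch of buildModel() is sketched below. This is only illustrative, not a tested patch; the code that later derives the suggested engine name from modelPath would probably need adjusting as well:

    if (cudaEngineGetFcn || cudaEngineGetDeprecatedFcn)
    {
        /* NvDsInferCudaEngineGet interface provided. Prefer the path the user
         * configured via model-engine-file; fall back to the current working
         * directory only when it is not set. */
        if (!string_empty(initParams.modelEngineFilePath))
        {
            modelPath = safeStr(initParams.modelEngineFilePath);
        }
        else
        {
            char *cwd = getcwd(NULL, 0);
            modelPath = std::string(cwd) + "/model";
            free(cwd);
        }
    }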

Anyway, it seems like my current workaround of serializing the engine myself is probably my best option for now.

The code is open source; you can change it as you like.
