NvDsInferCudaEngineGetFromTltModel: TLT encoded model file path not provided

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

        /* TLT model. Use NvDsInferCudaEngineGetFromTltModel function
         * provided by nvdsinferutils. */
        cudaEngineGetFcn = NvDsInferCudaEngineGetFromTltModel;
        modelPath = safeStr(initParams.tltEncodedModelFilePath);

I have printed the absolute model path and checked that everything is OK, but it always raises this error. I think the function NvDsInferCudaEngineGetFromTltModel is not open-sourced in nvdsinferutils.

What could be the reasons for that?

PS: When I installed DeepStream on another PC with the same source code, this error no longer appeared.

Do you mean the modelPath is wrong? Can you reinstall DeepStream on your device? Quickstart Guide — DeepStream 5.1 Release documentation

The printed modelPath is correct; the file can be found with the ls command. I reinstalled DeepStream and that fixed the issue. But I am wondering how this error could be raised, and what the reasons are, because this error message cannot be found in the DeepStream open-source code.

What error message?

ERROR msg: NvDsInferCudaEngineGetFromTltModel: TLT encoded model file path not provided

Something may have been missing from the installation.
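
For reference, `initParams.tltEncodedModelFilePath` is populated from the `[property]` group of the nvinfer config file. A fragment like the following (the paths and key value here are placeholders, not from this thread) is what should end up in that field:

```ini
[property]
# Path to the TLT/TAO encoded model; this populates
# initParams.tltEncodedModelFilePath in NvDsInferContextInitParams.
tlt-encoded-model=/opt/models/resnet18_detector.etlt
# Key used to decode the encoded model.
tlt-model-key=nvidia_tlt
```

If the config parses correctly and the path exists, yet the error still appears, a broken or version-mismatched DeepStream installation is a plausible cause, which would be consistent with the reinstall fixing it.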