Failed to build network since there is no model file matched

I have a question: my DeepStream version is 6.0, do I have to upgrade to DS 6.1?
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:861 failed to build network since there is no model file matched.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:799 failed to build network.
0:01:35.030324758 14950 0x2261400 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 2]: build engine file failed
0:01:35.030379151 14950 0x2261400 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 2]: build backend context failed
0:01:35.030404174 14950 0x2261400 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger: NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 2]: generate backend failed, check config file settings
0:01:35.031215691 14950 0x2261400 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:01:35.031251382 14950 0x2261400 WARN nvinfer gstnvinfer.cpp:841:gst_nvinfer_start: error: Config file path: config_retinaface.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
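
For context, this particular error usually means that nvinfer could not find any source model in config_retinaface.txt to build an engine from (no loadable pre-built engine, and no ONNX/Caffe/UFF/TLT model entry pointing at a file that exists on disk). Below is a minimal sketch of the relevant [property] keys, with hypothetical file names, assuming an ONNX export of the detector; it is not the poster's actual config.

[property]
gpu-id=0
# nvinfer needs at least one source model it can build an engine from,
# e.g. onnx-file for ONNX, or model-file + proto-file for Caffe.
onnx-file=./retinaface.onnx
# If this engine file already exists it is loaded directly; otherwise the
# engine is built from the model above.
model-engine-file=./retinaface_b1_gpu0_fp16.engine
# 0=FP32, 1=INT8, 2=FP16
network-mode=2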

Hi there @user70013 and welcome to the NVIDIA developer forums!

It is a bit hard to get the context of your question, but judging from the abbreviation and version number you are talking about the DeepStream SDK, correct?

I took the liberty of moving your post to the DeepStream category, I hope that is ok. If not, let me know and I’ll help you find the right place!

Thanks!

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Can you help describe the context before hitting this error?
Please also provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for a new requirement. Include the module name, i.e. which plugin or which sample application, and the function description.)
• The pipeline being used

[quote=“yingliu, post:3, topic:226193”]
How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
[/quote]

I have solved this issue, thanks.

I have run into a new issue. The primary GIE uses a RetinaFace network and its output is correct. As the secondary GIE I use an EfficientNet model that classifies each face as "no mask" or "with mask". The config.txt for secondary-gie2 is as follows:
gpu-id=0
net-scale-factor=1
offsets=77.5;21.2;11.8
model-engine-file=./efficientnet.engine
labelfile-path=./effLabel.txt
force-implicit-batch-dim=1
batch-size=1

# 0=FP32 and 1=INT8 mode

network-mode=0
#input-object-min-width=64
#input-object-min-height=64
process-mode=2
model-color-format=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
output-blob-names=predictions/Softmax
classifier-async-mode=1
classifier-threshold=0.51
process-mode=2
#scaling-filter=0
#scaling-compute-hw=0
But after the EfficientNet model runs, every face detected by the primary GIE is classified as "with mask", which is wrong. How can I solve this? The secondary GIE is based on deepstream-test2, and my EfficientNet model was trained with transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]).
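
For reference, nvinfer preprocesses each pixel as y = net-scale-factor * (x - offset) on the raw 0-255 input, while transforms.Normalize is applied to tensors already scaled to [0, 1]. A rough sketch of what the preprocessing keys would look like for that ImageNet normalization, assuming the model expects RGB input (the values below are derived from the mean/std above, not a confirmed fix for this model):

# 255 * mean, per channel
offsets=123.675;116.28;103.53
# 1 / (255 * average std); nvinfer accepts only a single scalar scale factor
net-scale-factor=0.017352
# 0=RGB (matching torchvision's channel order), 1=BGR
model-color-format=0

With net-scale-factor=1 and offsets=77.5;21.2;11.8, the tensor the engine sees is very different from what the model saw during training, which can easily collapse every detection into a single class; whether that is the actual cause here would still need to be checked against how the EfficientNet was exported.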

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.