Deepstream-test3 with custom model: ERROR: failed to build network since there is no model file matched

I am using my own YOLOv4-tiny model. With a single RTSP stream it runs fine; when I add a second RTSP stream, I get the errors below.

 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:07.946620832 24022   0x5580b28c00 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:07.946706560 24022   0x5580b28c00 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:07.946795744 24022   0x5580b28c00 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
• The pipeline being used

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6

• The pipeline being used: deepstream-test3 from the DeepStream SDK, with the yolo4.so model

1. There is an error printed: "failed to build network since there is no model file matched." It seems the model could not be found. What is your model format? Can you provide your configuration file?
2. Using NVIDIA's YOLOv4-tiny model, I can't reproduce your issue in deepstream-test3. Here is the model link: deepstream_reference_apps/download_models.sh at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub
3. This error comes from deepstream\deepstream\sources\libs\nvdsinfer\nvdsinfer_model_builder.cpp, which is open-source code; you can add some logs there to debug.

Running deepstream-test3 with one RTSP camera stream works correctly. When I use two RTSP streams, I get the error shown in the attached screenshot.

1. Does it now run OK with one RTSP source?
2. From the log, building the network failed; looking at the code in TrtModelBuilder::buildNetwork, this happens when the model type is not recognized.
3. Can you provide your model and configuration file? I will try to reproduce. Alternatively, you can add logs in buildNetwork to check the difference.

yolo4selftiny.engine (62.4 MB)
dstest3_pgie_config.txt (2.0 KB)
labels.txt (11 Bytes)
Thank you. These are the model and config files (attached above).

If batch-size changes to 2, the app will rebuild the engine, so you need to set the model file path in the config.
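For reference, the rebuild can only succeed if nvinfer can locate the original model files, not just the serialized engine. Below is a sketch of the relevant [property] keys in dstest3_pgie_config.txt, assuming the DarkNet-style YOLO custom implementation from NVIDIA's objectDetector_Yolo sample; the .cfg/.weights file names are placeholders, not the poster's actual files:

```ini
[property]
# Serialized engine; reused only when it matches batch-size, precision, etc.
model-engine-file=yolo4selftiny.engine
# Original model files that nvinfer needs when it has to rebuild the engine:
custom-network-config=yolov4-tiny-custom.cfg
model-file=yolov4-tiny-custom.weights
# Custom YOLO bounding-box parser / engine-builder library
custom-lib-path=libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
batch-size=2
num-detected-classes=1
```

If only model-engine-file is set, a batch-size mismatch forces a rebuild that has no model files to build from, producing exactly the "no model file matched" error.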

Modify batch-size to 2? Why?

When I modify batch-size to 2, I get the same error:

02:14:37.967 "../deepstream-test3/deepstream_test3_app.cpp" 280 argc 3
Unknown or legacy key specified 'is-classifier' for group [property]
Now playing: rtsp://admin:hk123456@192.168.1.70/h264/ch1/main/av_stream, rtsp://admin:hk123456@192.168.1.65/h264/ch1/main/av_stream,

Using winsys: x11 
0:00:04.944887872 20053   0x556aae1800 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/y/deepsample/build-deepstream-test3-Desktop-Release/yolo4selftiny.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input           3x416x416       
1   OUTPUT kFLOAT boxes           2535x1x4        
2   OUTPUT kFLOAT confs           2535x1          

0:00:04.945083232 20053   0x556aae1800 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1833> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:00:04.945127168 20053   0x556aae1800 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: deserialized backend context :/home/y/deepsample/build-deepstream-test3-Desktop-Release/yolo4selftiny.engine failed to match config params, trying rebuild
0:00:04.970366976 20053   0x556aae1800 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:04.971971008 20053   0x556aae1800 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:04.972061344 20053   0x556aae1800 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:04.972100576 20053   0x556aae1800 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:04.972522144 20053   0x556aae1800 WARN                 nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary-nvinference-engine> error: Failed to create NvDsInferContext instance
0:00:04.972560352 20053   0x556aae1800 WARN                 nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary-nvinference-engine> error: Config file path: dstest3_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running...
ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:dstest3-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: dstest3_pgie_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback
Deleting pipeline
Press <RETURN> to close this window...


the logic is in deepstream_test3_app.c: batch-size is set to 1 in the configuration file, but the number of sources is 2, so the app overrides it:

g_object_get (G_OBJECT (pgie), "batch-size", &pgie_batch_size, NULL);
if (pgie_batch_size != num_sources) {
  g_printerr
      ("WARNING: Overriding infer-config batch-size (%d) with number of sources (%d)\n",
      pgie_batch_size, num_sources);
  g_object_set (G_OBJECT (pgie), "batch-size", num_sources, NULL);
}

I know. Modifying this does not change the result; after modifying batch-size I still get the errors above.

If you use the same engine, don't change the batch size. You can comment out the override code above (for example, wrap it in #if 0 / #endif), then have a try.

Without modifying batch-size: my model has only one class, while the stock YOLOv4-tiny has 80. Could the mistake be there?

What do you mean? Is this still an issue that needs support?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.