• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6
• How to reproduce the issue / pipeline being used: running the DeepStream SDK sample deepstream-test3 with a YOLOv4 model loaded through yolo4.so
1. There is an error printed: "failed to build network since there is no model file matched." It seems nvinfer cannot find your model. What is your model format? Can you provide your configuration file? (A hedged config sketch follows after this list.)
2. Using NVIDIA's YOLOv4-tiny model, I cannot reproduce your issue in deepstream-test3. Here is the model link: deepstream_reference_apps/download_models.sh at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub
3. The error comes from deepstream/deepstream/sources/libs/nvdsinfer/nvdsinfer_model_builder.cpp. It is open-source code, so you can add some logs to debug.
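For reference, here is a minimal sketch of the [property] keys nvinfer uses to decide the model format. The key names are real nvinfer config keys, but the file names and the custom functions (yolov4.cfg, yolov4.weights, libnvdsinfer_custom_impl_Yolo.so, NvDsInferYoloCudaEngineGet) are placeholders modeled on the objectDetector_Yolo sample, not your actual files:

  [property]
  # nvinfer picks the network builder from whichever of these is set;
  # when the engine must be (re)built, a model-engine-file alone is
  # not enough -- one buildable model source is required:
  #   model-file= / proto-file=   (Caffe)
  #   onnx-file=                  (ONNX)
  #   uff-file=                   (UFF)
  # or a custom engine builder exported by a shared library:
  custom-network-config=yolov4.cfg
  model-file=yolov4.weights
  custom-lib-path=libnvdsinfer_custom_impl_Yolo.so
  engine-create-func-name=NvDsInferYoloCudaEngineGet
  # cache of the built engine (reused only if batch size etc. match):
  model-engine-file=model_b1_gpu0_fp16.engine
  batch-size=1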
1. Does it now run OK with a single RTSP source?
2. From the log, it failed to build the network; in TrtModelBuilder::buildNetwork this happens when none of the configured model files matches a known model type.
3. Can you provide your model and configuration file? I will try to reproduce. Alternatively, you can add logs in buildNetwork to check the difference; a sketch of such a log helper follows below.
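Following up on the "add logs" suggestion, here is a hedged sketch of a helper you could paste into nvdsinfer_model_builder.cpp and call at the start of TrtModelBuilder::buildNetwork to print which model paths nvinfer actually received from the config. The field names are taken from NvDsInferContextInitParams in nvdsinfer_context.h as shipped with DeepStream 6.0; verify them against your headers before building:

  #include <cstdio>

  /* NvDsInferContextInitParams is declared in nvdsinfer_context.h,
   * which nvdsinfer_model_builder.cpp already includes. */
  static void
  dumpModelPaths (const NvDsInferContextInitParams &p)
  {
    /* If every line below prints empty, "no model file matched" is
     * expected: the config gave nvinfer nothing to build from. */
    fprintf (stderr, "model-file:        '%s'\n", p.modelFilePath);
    fprintf (stderr, "proto-file:        '%s'\n", p.protoFilePath);
    fprintf (stderr, "onnx-file:         '%s'\n", p.onnxFilePath);
    fprintf (stderr, "uff-file:          '%s'\n", p.uffFilePath);
    fprintf (stderr, "tlt-encoded-model: '%s'\n", p.tltEncodedModelFilePath);
    fprintf (stderr, "custom-lib-path:   '%s'\n", p.customLibPath);
    fprintf (stderr, "model-engine-file: '%s'\n", p.modelEngineFilePath);
  }

After adding the call, rebuild the library under sources/libs/nvdsinfer and reinstall it as described in the README in that directory.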
The logic is in deepstream_test3_app.c: batch-size is set to 1 in the configuration file, but the source count is 2, so the sample overrides it. That is likely what triggers the error: the overridden batch size no longer matches the serialized engine, nvinfer has to rebuild the engine from a model source, and if the config only provides a pre-built engine (no Caffe/ONNX/UFF model or custom engine-create function), buildNetwork fails with "no model file matched."
/* deepstream_test3_app.c: the sample forces the nvinfer batch size to
 * match the number of input sources. */
g_object_get (G_OBJECT (pgie), "batch-size", &pgie_batch_size, NULL);
if (pgie_batch_size != num_sources) {
  g_printerr
      ("WARNING: Overriding infer-config batch-size (%d) with number of sources (%d)\n",
      pgie_batch_size, num_sources);
  g_object_set (G_OBJECT (pgie), "batch-size", num_sources, NULL);
}
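If this override is indeed the trigger, one hedged fix (assuming two sources, as in this thread) is to make the configured batch size match the source count, so the sample does not override it at runtime and a matching engine can be used:

  [property]
  # match the number of input sources so deepstream-test3 does not
  # override batch-size at runtime; regenerate the engine once for
  # batch-size=2 (e.g. a model_b2_* engine) or let nvinfer rebuild it
  # from the model source
  batch-size=2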