Yolo type is not defined from config file name - Multistream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) RTX 2060 Super
• DeepStream Version 5.0
• TensorRT Version 7.0.0.11
• NVIDIA GPU Driver Version (valid for GPU only) 455
• Issue Type( questions, new requirements, bugs) bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
You can reproduce the issue by running the deepstream_imagedata-multistream.py script with a custom YOLO model on more than one video (more than one stream).

I am trying to run deepstream_imagedata-multistream.py with a custom YOLO model. It works fine on one video/stream, but it crashes on more than one, producing the following errors. This is the output log:

/usr/bin/python3.6 /opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/DeepstreamDeploy_AccidentDetection/deepstream_imagedata-multistream.py file:/home/mina-abdelmassih/deepstream/accident_video_1.264 file:/home/mina-abdelmassih/deepstream/accident_video_1.264 frame
Frames will be saved in  frame
Creating Pipeline 
 
Creating streamux 
 
Creating source bin
source-bin-00
Creating source bin
source-bin-01
Creating Pgie 
 
Creating nvvidconv1 
 
Creating filter1 
 
Creating tiler 
 
Creating nvvidconv 
 
Creating nvosd 
 
Creating EGLSink 

WARNING: Overriding infer-config batch-size 1  with number of sources  2  

Adding elements to Pipeline 

Linking elements in the Pipeline 

Now playing...
1 :  file:/home/mina-abdelmassih/deepstream/accident_video_1.264
2 :  file:/home/mina-abdelmassih/deepstream/accident_video_1.264
Starting pipeline 

Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input           3x608x608       
1   OUTPUT kFLOAT boxes           22743x1x4       
2   OUTPUT kFLOAT confs           22743x12        

Exiting app

0:00:01.689575040  2137     0x1205b320 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/DeepstreamDeploy_AccidentDetection/yolov4_finetune.engine
0:00:01.689606509  2137     0x1205b320 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1642> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:00:01.689611238  2137     0x1205b320 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1813> [UID = 1]: deserialized backend context :/opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/DeepstreamDeploy_AccidentDetection/yolov4_finetune.engine failed to match config params, trying rebuild
0:00:01.691055706  2137     0x1205b320 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
Yolo type is not defined from config file name:
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:01.691269825  2137     0x1205b320 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1735> [UID = 1]: build engine file failed
0:00:01.691276918  2137     0x1205b320 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1821> [UID = 1]: build backend context failed
0:00:01.691280582  2137     0x1205b320 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1148> [UID = 1]: generate backend failed, check config file settings
0:00:01.691401198  2137     0x1205b320 WARN                 nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:01.691404461  2137     0x1205b320 WARN                 nvinfer gstnvinfer.cpp:809:gst_nvinfer_start:<primary-inference> error: Config file path: dstest_imagedata_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(809): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: dstest_imagedata_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

Process finished with exit code 0

This is also the configuration file:

   ################################################################################
    # Copyright (c) 2018-2020, NVIDIA CORPORATION. All rights reserved.
    #
    # Permission is hereby granted, free of charge, to any person obtaining a
    # copy of this software and associated documentation files (the "Software"),
    # to deal in the Software without restriction, including without limitation
    # the rights to use, copy, modify, merge, publish, distribute, sublicense,
    # and/or sell copies of the Software, and to permit persons to whom the
    # Software is furnished to do so, subject to the following conditions:
    #
    # The above copyright notice and this permission notice shall be included in
    # all copies or substantial portions of the Software.
    #
    # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
    # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
    # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
    # DEALINGS IN THE SOFTWARE.
    ################################################################################

    # Following properties are mandatory when engine files are not specified:
    #   int8-calib-file(Only in INT8)
    #   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
    #   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
    #   ONNX: onnx-file
    #
    # Mandatory properties for detectors:
    #   num-detected-classes
    #
    # Optional properties for detectors:
    #   cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
    #   custom-lib-path
    #   parse-bbox-func-name
    #
    # Mandatory properties for classifiers:
    #   classifier-threshold, is-classifier
    #
    # Optional properties for classifiers:
    #   classifier-async-mode(Secondary mode only, Default=false)
    #
    # Optional properties in secondary mode:
    #   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
    #   input-object-min-width, input-object-min-height, input-object-max-width,
    #   input-object-max-height
    #
    # Following properties are always recommended:
    #   batch-size(Default=1)
    #
    # Other optional properties:
    #   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
    #   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
    #   mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
    #   custom-lib-path, network-mode(Default=0 i.e FP32)
    #
    # The values in the config file are overridden by values set through GObject
    # properties.

    [property]
    gpu-id=0
    net-scale-factor=0.0039215697906911373
    # model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
    # proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
    model-engine-file=yolov4_finetune.engine
    labelfile-path=labels_finetune.txt
    # int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
    force-implicit-batch-dim=1
    batch-size=1
    process-mode=1
    model-color-format=0
    network-mode=2
    num-detected-classes=12
    interval=0
    gie-unique-id=1
    output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
    engine-create-func-name=NvDsInferYoloCudaEngineGet
    parse-bbox-func-name=NvDsInferParseCustomYoloV4
    custom-lib-path=libnvdsinfer_custom_impl_YoloV4_12classes_carTruckBus.so
    ## 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3 = None(No clustering)
    cluster-mode=1

    [class-attrs-all]
    threshold=0.2
    eps=0.7
    minBoxes=1

    #Use the config params below for dbscan clustering mode
    [class-attrs-all]
    detected-min-w=4
    detected-min-h=4
    minBoxes=3

Also, it works fine with multiple streams when using the default parameters in the configuration file.
Thanks.

@Amycao @kayccc @mchi @Fiona.Chen

According to the error below,

Yolo type is not defined from config file name:
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function

it failed in cudaEngineGetFcn (…), that is, NvDsInferYoloCudaEngineGet() per your config file.
Could you check why it failed in NvDsInferYoloCudaEngineGet()? The corresponding failure log is: “Yolo type is not defined from config file name:”

File: /opt/nvidia/deepstream/deepstream-5.0/sources/libs/nvdsinfer/nvdsinfer_model_builder.cpp

/* Get already built CUDA Engine from custom library. */
std::unique_ptr<TrtEngine>
TrtModelBuilder::getCudaEngineFromCustomLib(
    ...)
{
    ...
    /* Get the cuda engine from the library */
    nvinfer1::ICudaEngine *engine = nullptr;
    if (cudaEngineGetFcn && (!cudaEngineGetFcn (m_Builder.get(),
                (NvDsInferContextInitParams *)&initParams,
                modelDataType, engine) ||
            engine == nullptr))
    {
        dsInferError("Failed to create network using custom network creation"
                " function");
        return nullptr;
    }
    ...
}

Sorry for the late reply; it was due to the holidays.

The problem was due to this warning:
nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1642> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested

When I generated an engine with a batch size of 2, it worked.
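For reference, this mismatch can be caught before starting the pipeline. Below is a minimal sketch (not part of DeepStream; `check_batch_size` is a hypothetical helper) that reads the nvinfer config the way the warning above implies: if `batch-size` under `[property]` is smaller than the number of sources, nvinfer overrides it, finds the serialized engine's maxBatchSize too small, and tries to rebuild, which is where the custom engine-create function failed here.

```python
# Hypothetical helper (not part of DeepStream): check that the batch-size
# in the nvinfer config covers the number of input streams, since nvinfer
# overrides batch-size with the source count and then has to rebuild the
# engine when the serialized engine's maxBatchSize is too small.
import configparser

def check_batch_size(config_text, num_sources):
    # strict=False because nvinfer configs may repeat sections
    # (e.g. multiple [class-attrs-all] groups, as in the file above)
    parser = configparser.ConfigParser(strict=False)
    parser.read_string(config_text)
    batch_size = parser.getint("property", "batch-size", fallback=1)
    return batch_size >= num_sources

config = """
[property]
gpu-id=0
model-engine-file=yolov4_finetune.engine
batch-size=1
"""

# Two streams but an engine built with maxBatchSize 1: the check fails,
# signalling the engine should be regenerated with batch size >= 2.
print(check_batch_size(config, 2))  # → False
print(check_batch_size(config, 1))  # → True
```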

Right now I am facing a different problem: I am trying to generate a YOLO engine with a dynamic batch size.
I generated the dynamic model using this command:
/home/mina-abdelmassih/deepstream/TensorRT-7.1.3.4/bin/trtexec --onnx=yolov4-608.onnx --explicitBatch --optShapes=000_net:16x3x608x608 --maxShapes=000_net:32x3x608x608 --minShapes=000_net:1x3x608x608 --shapes=000_net:8x3x608x608 --saveEngine=yolov4_-1_3_608_608_dynamic.engine --workspace=4096 --fp16

That worked fine as well; loading the model with trtexec succeeds, like this:

/home/mina-abdelmassih/deepstream/TensorRT-7.1.3.4/bin/trtexec --loadEngine=yolov4_-1_3_608_608_dynamic.engine --shapes=000_net:Nx3x608x608

I tested N with different values, and I also tried the same with TensorRT version 7.0.0.11, which worked fine. But when I added the model engine to DeepStream 5, it gave me the same warning and failed again. I also happened to find this in the ONNX export function docstring:

at the moment, it supports a limited set of dynamic models (e.g., RNNs.)

So does this mean that YOLO cannot be used with dynamic batches on DeepStream 5? And if it is possible, how do I do that?

Thanks.

I managed to do it. Converting the model to a dynamic engine works with TensorRT version 7.0.0.11; you also need to export the ONNX model with dynamic axes as well, and with PyTorch version 1.4, not higher.