How should I fix this warning?

I am chaining models in series, following the deepstream-test2 sample, but I have set the second-stage model as a primary model as well.
During inference I get the following warning. Is this warning a pipeline problem, and how can I fix it?
WARNING: Num classes mismatch. configured: 3, detected by network: 80
I set network-type=0 and process-mode=1 for all models.
The first-stage primary model has 80 classes and the second-stage primary model has 3 classes.

  • deepstream-app version 6.1.0
  • DeepStreamSDK 6.1.0
  • CUDA Driver Version: 11.4
  • CUDA Runtime Version: 11.0
  • TensorRT Version: 8.2
  • cuDNN Version: 8.4
  • libNVWarp360 Version: 2.0.1d3
  • Device: NVIDIA A6000

Do you use the same model for the SGIE as for the PGIE?

Your model has an output of 80 classes, but num-detected-classes seems to be set to 3; you need to change the setting to 80 to match the model.

It is the same type of model, “yolov5s”.
The first primary model detects 80 classes;
the second primary model in the series detects 3.

There is no problem with the settings!
I think the issue is in DeepStream: when the models run in series, the second model's detection step reads the output class count of the first model. My first model outputs 80 classes, but my second model has only 3 classes, so the warning above is issued.
This warning should not appear, because I set both models in the serial pipeline to be primary models.

  1. Could you share your configuration files?
  2. You can find this warning in the DeepStream SDK; it appears because the configured class count does not match NUM_CLASSES_YOLO (80).

The first primary model's configuration file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
#custom-network-config=yolov5s.cfg
#model-file=yolov5s.wts
model-engine-file=model_b4_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=4
network-mode=0
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=4
maintain-aspect-ratio=0
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
pre-cluster-threshold=0

The second primary model's configuration file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
#custom-network-config=**.cfg
#model-file=**.wts
model-engine-file=./lightEngine/model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=./lightEngine/labels.txt
batch-size=1
network-mode=0
num-detected-classes=3
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=0
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
#engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300

Is the second model a YOLO model? If not, you should not set parse-bbox-func-name=NvDsInferParseYolo. NvDsInferParseYolo prints that warning when num-detected-classes does not match NUM_CLASSES_YOLO (80).

Both are yolov5s models.
If I use the second primary model alone, with 3 classes, it does not print this warning.
Could it be that the gst-nvinfer plugin itself has a problem when models are chained in series and all of them are set as primary models?

  1. Please set a different gie-unique-id for each GIE if there are two of them.
  2. Please add logging in NvDsInferParseYolo to check why there is no warning when the second primary model is used alone.
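For illustration, here is a minimal sketch of the relevant lines for two nvinfer elements in one pipeline. Note that both configuration files posted above currently set gie-unique-id=1; the values below are only examples, not the poster's actual files:

```ini
# First nvinfer element's config file (illustrative)
[property]
gie-unique-id=1
process-mode=1

# Second nvinfer element's config file (illustrative)
[property]
gie-unique-id=2
process-mode=1
```

Each gst-nvinfer instance in a pipeline must carry a unique gie-unique-id so that its metadata can be distinguished downstream.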

Here is the source code I found:

static bool NvDsInferParseCustomYolo(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo, NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams, std::vector<NvDsInferParseObjectInfo>& objectList,
    const uint &numClasses)
{
    if (outputLayersInfo.empty())
    {
        std::cerr << "ERROR: Could not find output layer in bbox parsing" << std::endl;
        return false;
    }

    if (numClasses != detectionParams.numClassesConfigured)//int num_classes = kNUM_CLASSES;
    {
        std::cerr << "WARNING: Num classes mismatch. Configured: " << detectionParams.numClassesConfigured
                  << ", detected by network: " << numClasses << std::endl;
    }
......
}

Is kNUM_CLASSES == 80? Is this constant defined in DeepStream and used during inference?

  1. This constant is not used for inference; it is used for post-processing (bounding-box parsing). The parser is open source; you can find customBBoxParseFuncName in the SDK.
  2. About that warning: NvDsInferParseYolo is open source, so you can write a new NvDsInferParseYolo for 3 classes. By the way, why does your YOLO model have 3 classes?

Yes, I trained on my own dataset, which has 3 categories.
It seems that this warning does not affect the detection results, so I can just ignore it, right?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Please check whether NvDsInferParseCustomYolo uses that kNUM_CLASSES; some functions use it for parsing, such as NvDsInferParseYoloV2.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.