I am using a serial (cascaded) model setup following the deepstream-test2 sample, but I set the second-stage model as a primary model as well.
During inference the following warning is printed. Is this warning a pipeline problem? How can I fix it?
WARNING: Num classes mismatch. configured: 3, detected by network: 80
I set network-type to 0 and process-mode to 1 in the parameters of both models.
The first primary model detects 80 classes and the second primary model detects 3 classes.
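For reference, a minimal sketch of the gst-nvinfer settings described above (only the properties mentioned in this post; the file paths and remaining properties of a real config file are omitted):

```ini
[property]
network-type=0            # 0 = detector
process-mode=1            # 1 = full-frame (primary); 2 = on detected objects (secondary)
num-detected-classes=80   # first model; the second model's config file uses 3
```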
- deepstream-app version 6.1.0
- DeepStreamSDK 6.1.0
- CUDA Driver Version: 11.4
- CUDA Runtime Version: 11.0
- TensorRT Version: 8.2
- cuDNN Version: 8.4
- libNVWarp360 Version: 2.0.1d3
- Device: A6000
Do you use the same model for the SGIE as for the PGIE?
Your model has an output of 80 classes, while num-detected-classes seems to be set to 3; you need to adapt the setting to 80 to match the model.
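In other words, the config file driving the 80-class model would need its [property] section to state the count the engine actually outputs; a sketch, assuming the standard COCO-trained yolov5s:

```ini
[property]
# num-detected-classes must match the number of classes the engine actually outputs
num-detected-classes=80
```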
Both are the same type of model, “yolov5s”.
The first primary model detects 80 classes; the second primary model in the series detects 3.
There is no problem with the settings!
I think that in DeepStream, when the second model of the serial pipeline runs detection, it reads the output class count of the first model. My first model outputs 80 classes, but my second model has only 3, so it issues the warning above.
This warning should not appear, because I set both models of the serial pipeline to be primary models.
The first primary model config file:
The second primary model config file:
Is the second model a YOLO model? If not, you should not set parse-bbox-func-name=NvDsInferParseYolo; NvDsInferParseYolo will print that warning when num-detected-classes does not match NUM_CLASSES_YOLO (80).
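For a model that is not parsed by the YOLO custom library, these custom parser entries would be dropped or replaced; a sketch of the YOLO case for comparison (the library path here is illustrative):

```ini
[property]
# Only for YOLO models built with the custom parser library:
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/path/to/libnvdsinfer_custom_impl_Yolo.so
```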
Both are yolov5s models
If I run the second primary model alone, with 3 classes, it does not print this warning.
Could it be that the gst-nvinfer plugin itself has a problem when serial models are used and all of them are primary models?
The source code I found is shown here:
static bool NvDsInferParseCustomYolo(
    std::vector&lt;NvDsInferLayerInfo&gt; const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector&lt;NvDsInferParseObjectInfo&gt;& objectList,
    const uint& numClasses)
{
    if (outputLayersInfo.empty()) {
        std::cerr << "ERROR: Could not find output layer in bbox parsing" << std::endl;
        return false;
    }
    if (numClasses != detectionParams.numClassesConfigured) { // int num_classes = kNUM_CLASSES;
        std::cerr << "WARNING: Num classes mismatch. Configured: " << detectionParams.numClassesConfigured
                  << ", detected by network: " << numClasses << std::endl;
    }
    // ... (remainder of the function not quoted)
kNUM_CLASSES == 80? Is this a constant defined in DeepStream that is used during inference?
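The mismatch check above boils down to comparing a constant compiled into the parser library against the value configured in num-detected-classes. A standalone illustration of that logic (kNUM_CLASSES and checkNumClasses are illustrative names here, not the actual DeepStream symbols):

```cpp
#include <iostream>

// Illustrative stand-in for the class count compiled into the YOLO parser library.
static const unsigned int kNUM_CLASSES = 80;

// Returns true when the configured class count matches the compiled-in count;
// otherwise prints the same style of warning seen in the log and returns false.
static bool checkNumClasses(unsigned int numClassesConfigured)
{
    if (kNUM_CLASSES != numClassesConfigured) {
        std::cerr << "WARNING: Num classes mismatch. Configured: "
                  << numClassesConfigured
                  << ", detected by network: " << kNUM_CLASSES << std::endl;
        return false;
    }
    return true;
}
```

Because the constant lives in the compiled parser library, rebuilding the engine alone does not change it; the parser source must be rebuilt with the new class count.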
Yes, the classification uses my own dataset, which has 3 categories.
It seems this warning does not affect the detection results, so I can just ignore it, right?
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Please check whether NvDsInferParseCustomYolo uses that kNUM_CLASSES; some parsing functions use it, such as NvDsInferParseYoloV2.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.