Can't deploy my custom classifier on DeepStream

I am trying to run my Keras .h5 model on DeepStream 5. I took the provided Python sample deepstream-test1 as a base and am trying to change it to fit my model's needs. I did the following:

  • converted my model to an ONNX model, then converted that to a TensorRT engine
  • created a labels.txt file for my classes
  • changed the configuration file dstest1_pgie_config.txt to:

process-mode=1 #primary
network-type=1 #classifier
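
For a classifier running as the primary GIE, the config usually needs a few more entries than the two shown above. A sketch with illustrative values (the file names below are placeholders, not from the original post):

onnx-file=model.onnx
model-engine-file=model.engine
labelfile-path=labels.txt
process-mode=1 #primary
network-type=1 #classifier
classifier-threshold=0.5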

The code runs without errors, but it produces no output either. When I print frame_meta.bInferDone it gives me zero. Why is that?

I am using a GeForce GTX 1650 with TensorRT, driver version 450.51.06, CUDA version 11.0.

Thank you.

Hey, is your model a classifier model? Also, it seems you need to customize the post-process parser; see the reference.

Yes, my model is a 2-class classifier. I don't see how the document you are referring to is relevant to my problem. Yes, I think I should change my parser function. Can you point me to any sample code for a classifier output parser?
Thank you.

My reference link is for a customized detection post-process parser, which is similar to a classifier parser; I think it's simple to implement your own parser based on the sample.

You can also refer to the C/C++ code for how to customize a classifier parser; see /opt/nvidia/deepstream/deepstream-5.0/sources/libs/nvdsinfer_customparser/nvdsinfer_customclassifierparser.cpp

Okay will take a look. Thank you so much.

I changed the labels variable in

Then I added the .so file and the function name to the config file like so:

process-mode=1 #primary
network-mode=1 #FP32
network-type=1 #classifier
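
When hooking in a custom parser library, the relevant entries typically look like the following (the library path is a placeholder; the function name matches the custom parser discussed in this thread):

custom-lib-path=/path/to/libnvds_infercustomparser.so
parse-classifier-func-name=NvDsInferClassiferParseCustomSoftmax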

Now I have this error:

I debugged the output from nvdsinfer_customclassifierparser.cpp, and it is parsing the output correctly. Now I want to read this output from my Python code. I am using the same Python script as the one in
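
For reading classifier output on the Python side, the usual route is to walk the object's classifier metadata inside a pad probe. A rough pseudocode sketch of the traversal, with the pyds cast and field names as my assumptions rather than verified code:

```
# inside a buffer-probe callback, for each obj_meta:
l_class = obj_meta.classifier_meta_list
while l_class is not None:
    classifier_meta = pyds.NvDsClassifierMeta.cast(l_class.data)
    l_label = classifier_meta.label_info_list
    while l_label is not None:
        label_info = pyds.NvDsLabelInfo.cast(l_label.data)
        # label_info.result_class_id / result_prob / result_label
        l_label = l_label.next
    l_class = l_class.next
```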


Have you tried this with the C/C++ sample to see if the issue persists?

No I didn’t. But I want to work in python


Yeah, but we should make sure the lib works well first.

Okay will try.

Hi fadwa.fawzy,

Is this still an issue you need support with? Are there any results you can share?


No. Didn’t try yet.

Hello, I made a custom parsing function for my classifier.
I assigned the values where the sample did,
but when I print this:

print(f"result_class_id {current_obj_classification.result_class_id}, "
      f"label_id:{current_obj_classification.label_id} "
      f"number of labels {obj_class_meta.num_labels} "
      # f"\nlabel {current_obj_classification.result_label}, "
      f"result label length {len(current_obj_classification.result_label)}\n"
      f"result probability {current_obj_classification.result_prob}")

in my Python code, I get

result_class_id 0, label_id:0 number of labels 1 result label length 128
 result probability 0.0

This is the function I call in my config:

(The model itself has a softmax output of shape (None, 1), but I want to express two classes with it as shown here. Does that affect the output?)

    bool NvDsInferClassiferParseCustomSoftmax (std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
            NvDsInferNetworkInfo const &networkInfo,
            float classifierThreshold,
            std::vector<NvDsInferAttribute> &attrList,
            std::string &descString)
    {
        /* Get the number of attributes supported by the classifier. */
        unsigned int numAttributes = outputLayersInfo.size();

        /* Iterate through all the output coverage layers of the classifier. */
        for (unsigned int l = 0; l < numAttributes; l++)
        {
            /* outputCoverageBuffer for classifiers is usually a softmax layer.
             * The layer is an array of probabilities of the object belonging
             * to each class, with each probability being in the range [0,1]
             * and the sum of all probabilities being 1. */
            NvDsInferDimsCHW dims;
            getDimsCHWFromDims(dims, outputLayersInfo[l].inferDims);
            unsigned int numClasses = dims.c;
            (void) numClasses; /* unused for the single-output case below */

            float *outputCoverageBuffer = (float *) outputLayersInfo[l].buffer;
            bool attrFound = false;
            NvDsInferAttribute attr;

            /* Unlike the original function, this one is made for a classifier
             * with one probability and 2 classes (0 emergency, 1 non_emergency),
             * so instead of looping over classes we just compare the confidence
             * to the threshold: (probability > thresh) => non_emergency,
             * (probability <= thresh) => emergency. */
            float probability = outputCoverageBuffer[0];
            if (probability > classifierThreshold)
            {
                attrFound = true;
                attr.attributeIndex = l;
                attr.attributeValue = 1;
                attr.attributeConfidence = (probability - classifierThreshold) / (1 - classifierThreshold);
            }
            else /* probability <= thresh */
            {
                attrFound = true;
                attr.attributeIndex = l;
                attr.attributeValue = 0;
                attr.attributeConfidence = (classifierThreshold - probability) / classifierThreshold;
            }
            // Debug: print the parsed attribute.
            // std::cout << "Layer " << outputLayersInfo[l].layerName
            //           << " attribute value " << attr.attributeValue
            //           << " calculated confidence " << attr.attributeConfidence
            //           << " original confidence " << probability << std::endl;

            if (attrFound)
            {
                /* "labels" is the static label table defined at the top of
                 * nvdsinfer_customclassifierparser.cpp. */
                if (labels.size() > attr.attributeIndex &&
                        attr.attributeValue < labels[attr.attributeIndex].size())
                    attr.attributeLabel =
                        strdup(labels[attr.attributeIndex][attr.attributeValue].c_str());
                else
                    attr.attributeLabel = nullptr;
                /* Without this push_back the attribute never reaches the metadata. */
                attrList.push_back(attr);
                if (attr.attributeLabel)
                    descString.append(attr.attributeLabel).append(" ");
            }
        }
        return true;
    }
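
Two things in the function above can be sanity-checked in plain Python. First, on the (None, 1) softmax question: a softmax over a single output is constant 1.0, so if the exported model really ends in a one-element softmax, outputCoverageBuffer[0] would always be 1.0 and the parser would always report class 1 with full confidence; the two-class thresholding above effectively assumes a sigmoid-style score. Second, the threshold-to-confidence mapping itself. A minimal sketch with my own helper names (an illustration, not the parser's actual code):

```python
import math

def softmax(xs):
    """Standard softmax; with a single logit the result is always [1.0]."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def parse_two_class(probability, threshold):
    """Mirror the parser's thresholding: one score, two classes.
    Above the threshold -> class 1 (non_emergency); at or below -> class 0
    (emergency). Confidence is renormalized into [0, 1] on each side."""
    if probability > threshold:
        return 1, (probability - threshold) / (1 - threshold)
    return 0, (threshold - probability) / threshold

print(softmax([2.3]))             # a one-element softmax is always [1.0]
print(parse_two_class(0.8, 0.5))  # class 1, confidence ~0.6
print(parse_two_class(0.2, 0.5))  # class 0, confidence ~0.6
```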

@mai.algendy could you please create a new topic for your issue? We would like one topic to track one issue, and I don't think this is the same issue as the original topic.