Hi,
I have an ONNX classification model that I want to use as a secondary GIE in DeepStream.
I wrote a classifier output parsing function for this model, and it compiled fine. But when I run it with my deepstream-app, a core dump happens.
Is my parsing function wrong? Or is there something else I need to do in deepstream-app?
This is my parsing function:
#include <cstring>
#include <iostream>
#include "nvdsinfer_custom_impl.h"

/* This is a sample classifier output parsing function */

/* C-linkage to prevent name-mangling */
extern "C" bool NvDsInferClassifierParseCustomReLU(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    float classifierThreshold,
    std::vector<NvDsInferAttribute> &attrList,
    std::string &descString);

extern "C" bool NvDsInferClassifierParseCustomReLU(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    float classifierThreshold,
    std::vector<NvDsInferAttribute> &attrList,
    std::string &descString)
{
    /* Get the number of attributes supported by the classifier. */
    unsigned int numAttributes = outputLayersInfo.size();

    /* Iterate through all the output coverage layers of the classifier. */
    for (unsigned int l = 0; l < numAttributes; l++)
    {
        NvDsInferDimsCHW dims;
        getDimsCHWFromDims(dims, outputLayersInfo[l].inferDims);
        unsigned int numClasses = dims.c;
        float *outputCoverageBuffer = (float *)outputLayersInfo[l].buffer;

        bool attrFound = false;
        NvDsInferAttribute attr;
        int labelIndex = -1;
        float maxConfidence = -1;
        std::string label;

        for (unsigned int k = 0; k < numClasses; k++)
        {
            float probability = outputCoverageBuffer[k];
            if (probability < classifierThreshold)
                continue;
            if (probability > maxConfidence)
            {
                labelIndex = k;
                maxConfidence = probability;
                label = std::to_string(labelIndex);
            }
            // std::cout << "probability:" << probability << std::endl;
        }

        if (labelIndex >= 0)
        {
            attrFound = true;
            attr.attributeIndex = labelIndex; // l;
            attr.attributeValue = 0;          // c;
            attr.attributeConfidence = maxConfidence;
            attr.attributeLabel = (char *)label.c_str();
            std::cout << "labelIndex:" << labelIndex
                      << " confidence:" << maxConfidence
                      << " label:" << attr.attributeLabel
                      << std::endl;
        }
        else
        { // <= thresh
            attrFound = true;
            attr.attributeIndex = 0;
            attr.attributeValue = 0;
            attr.attributeConfidence = 0;
            attr.attributeLabel = "";
        }

        if (attrFound)
        {
            attrList.push_back(attr);
            if (attr.attributeLabel)
                descString.append(attr.attributeLabel).append(" ");
        }
    }
    return true;
}

/* Check that the custom function has been defined correctly */
CHECK_CUSTOM_CLASSIFIER_PARSE_FUNC_PROTOTYPE(NvDsInferClassifierParseCustomReLU);
And this is my error:
labelIndex:15 confidence:5.72152 label:15
free(): invalid pointer
run.sh: line 7: 10406 Aborted (core dumped)
The error happens when I push the value into attrList.
• Hardware Platform (Jetson / GPU) Xavier NX
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only) 4.5.1
• TensorRT Version 7.1
• Issue Type( questions, new requirements, bugs) questions, bugs