@preronamajumder
I think you have to implement your own parse function to handle network outputs.
You can find this piece of code in the DeepStream SDK sources (it is from DeepStream 5.0, but I think DeepStream 4.0 should be similar):
/* Call custom parsing function if specified otherwise use the one
 * written along with this implementation. */
if (m_CustomClassifierParseFunc)
{
    if (!m_CustomClassifierParseFunc(outputLayers, m_NetworkInfo,
            m_ClassifierThreshold, attributes, attrString))
    {
        printError("Failed to parse classification attributes using "
                "custom parse function");
        return NVDSINFER_CUSTOM_LIB_FAILED;
    }
}
else
{
    if (!parseAttributesFromSoftmaxLayers(outputLayers, m_NetworkInfo,
            m_ClassifierThreshold, attributes, attrString))
    {
        printError("Failed to parse bboxes");
        return NVDSINFER_OUTPUT_PARSING_FAILED;
    }
}
This code shows that you can implement a custom classifier parse function of your own.
You then have to add the following configuration to the [property] section of your nvinfer config file so that DeepStream will call your custom parse function:
parse-classifier-func-name=name_of_your_own_customized_parse_function
custom-lib-path=dir_to_your_own_cpp_library/your_own_cpp_library.so
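For reference, a minimal [property] section for a classifier could look roughly like this. The paths and threshold here are just placeholders; network-type=1 marks the model as a classifier in DeepStream 5.0 (I think the 4.0 equivalent is is-classifier=1):

    [property]
    gpu-id=0
    model-engine-file=dir_to_your_model/your_model.engine
    labelfile-path=dir_to_your_model/labels.txt
    network-type=1
    classifier-threshold=0.5
    parse-classifier-func-name=name_of_your_own_customized_parse_function
    custom-lib-path=dir_to_your_own_cpp_library/your_own_cpp_library.so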
And your customized function would look like this (this is for DeepStream 5.0, I am not sure 4.0 is the same):
extern "C" bool
name_of_your_own_customized_parse_function(
std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
NvDsInferNetworkInfo const& networkInfo, float classifierThreshold,
std::vector<NvDsInferAttribute>& attrList, std::string& attrString)
{
// TODO: Add your own parse logics here
return true;
}
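If it helps, here is a rough sketch of what the parse logic could look like for a plain softmax classifier. This is only an illustration under my own assumptions: a single output layer of float probabilities, inferDims as the DeepStream 5.0 field name (it was dims in 4.0 if I remember correctly), and kClassLabels as a hypothetical label table you would replace with your own. The strdup of the label follows the pattern in the SDK's sample classifier parser (you also need <cstring> for strdup):

    extern "C" bool
    name_of_your_own_customized_parse_function(
        std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
        NvDsInferNetworkInfo const& networkInfo, float classifierThreshold,
        std::vector<NvDsInferAttribute>& attrList, std::string& attrString)
    {
        if (outputLayersInfo.empty())
            return false;

        /* Assumption: the first output layer is a softmax over the classes. */
        const NvDsInferLayerInfo& layer = outputLayersInfo[0];
        const float* probs = static_cast<const float*>(layer.buffer);
        unsigned int numClasses = layer.inferDims.numElements;

        /* Find the class with the highest probability. */
        unsigned int bestClass = 0;
        float bestProb = 0.0f;
        for (unsigned int c = 0; c < numClasses; c++)
        {
            if (probs[c] > bestProb)
            {
                bestProb = probs[c];
                bestClass = c;
            }
        }

        /* Only report an attribute if it passes the configured threshold. */
        if (bestProb < classifierThreshold)
            return true;

        /* kClassLabels is a hypothetical label table; replace with your own.
         * attributeLabel is heap-allocated here, as in the SDK sample parser. */
        static const char* kClassLabels[] = {"class0", "class1", "class2"};
        const unsigned int numLabels = sizeof(kClassLabels) / sizeof(kClassLabels[0]);

        NvDsInferAttribute attr;
        attr.attributeIndex = 0;
        attr.attributeValue = bestClass;
        attr.attributeConfidence = bestProb;
        attr.attributeLabel =
            bestClass < numLabels ? strdup(kClassLabels[bestClass]) : nullptr;

        attrList.push_back(attr);
        if (attr.attributeLabel)
            attrString.append(attr.attributeLabel);

        return true;
    }

If I remember correctly, nvdsinfer_custom_impl.h also provides a CHECK_CUSTOM_CLASSIFIER_PARSE_FUNC_PROTOTYPE macro you can use so the compiler verifies that your function matches the expected prototype.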
There are examples of customized parse functions in these sample apps (the only difference is that they customize bbox parsing functions for detection networks):
objectDetector_FasterRCNN
objectDetector_SSD
objectDetector_Yolo
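Each of these samples contains a nvdsinfer_custom_impl_* folder with a Makefile that builds the custom parser into a .so, so you can adapt one of them to build the library you point custom-lib-path at (at least that is how they are organized in DeepStream 5.0).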