DeepStream SDK FAQ

30.[DSx_All_App] How to parse tensor output layers in custom post-processing for nvinfer and nvinferserver?
In nvinferserver gRPC mode, the output tensor layers may arrive in an arbitrary order, so the parser must not rely on fixed layer indices. We suggest looking up each layer by name, as in the following method for parsing the tensor output layers.

#include <iostream>
#include <string>
#include <vector>

#include "nvdsinfer_custom_impl.h"

/* Locate output layers by name instead of by index so the parser is
   unaffected by the order in which the layers arrive. extern "C" keeps
   the symbol unmangled so nvinfer/nvinferserver can load it from the
   custom library. */
extern "C" bool NvDsInferParseCustom(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferInstanceMaskInfo> &objectList)
{
    /* Returns the output layer with the given name, or nullptr if absent. */
    auto layerFinder = [&outputLayersInfo](const std::string &name)
        -> const NvDsInferLayerInfo * {
        for (auto &layer : outputLayersInfo) {
            if (layer.layerName && name == layer.layerName) {
                return &layer;
            }
        }
        return nullptr;
    };

    /* The layer names "generate_detections" and "mask_fcn_logits/BiasAdd"
       are taken as examples; substitute your model's output layer names. */
    const NvDsInferLayerInfo *detectionLayer = layerFinder("generate_detections");
    const NvDsInferLayerInfo *maskLayer = layerFinder("mask_fcn_logits/BiasAdd");

    if (!detectionLayer || !maskLayer) {
        std::cerr << "ERROR: some layers missing or unsupported data types "
                  << "in output tensors" << std::endl;
        return false;
    }
    ......  /* model-specific decoding of the detection and mask buffers */
}
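
Once a layer has been located, its raw data is available through NvDsInferLayerInfo::buffer and its shape through NvDsInferLayerInfo::inferDims. The fragment below is a minimal sketch of the remaining wiring: a compile-time prototype check using the CHECK_CUSTOM_INSTANCE_MASK_PARSE_FUNC_PROTOTYPE macro from nvdsinfer_custom_impl.h, and a typical read of a located layer. The detection buffer layout itself is model-specific and is not shown.

/* Validate the parser signature at compile time. */
CHECK_CUSTOM_INSTANCE_MASK_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustom);

/* Inside the parser, after layerFinder() succeeds
   (assumes detectionLayer->dataType == FLOAT): */
const float *dets = static_cast<const float *>(detectionLayer->buffer);
unsigned int numElements = detectionLayer->inferDims.numElements;

For nvinfer, the parser is then registered in the inference config file; the library path below is a placeholder for wherever you build the custom parser library:

[property]
output-instance-mask=1
parse-bbox-instance-mask-func-name=NvDsInferParseCustom
custom-lib-path=/path/to/libnvds_custom_parser.so

nvinferserver loads the same library through the custom_lib path setting in its protobuf config. Because the layers are resolved by name rather than by index, the same parsing library works with both plugins.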

Related topics:
[Nvinfer's results are different from nvinferserver]
[Running Yolov5 Model in triton inference server with GRPC mode to work with Deepstream]