ONNX model on DeepStream 5.0: nvinfer error: Could not find NMS layer buffer while parsing

• Hardware Platform (Jetson / GPU) Jetson Nano
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) 4.4
• TensorRT Version 7.1
• Issue Type( questions, new requirements, bugs) question

  1. Trained an SSD-MobileNet-v1 model on a custom dataset and converted it to ONNX format following the instructions here: https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-ssd.md
  2. Checked whether the generated ONNX model is compatible with DeepStream by running /usr/src/tensorrt/bin/trtexec --onnx=/home/jetson/videostreams/samples/models/onx/ssd-mobilenet.onnx
    The output I got here is:

&&&& PASSED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=/home/jetson/videostreams/samples/models/onx/ssd-mobilenet.onnx

which implies that the model is compatible with DeepStream

  3. Built the custom parser from /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD
  4. Modified the following in the nvinfer config file:

model-engine-file=samples/models/onx/ssd-mobilenet.onnx.1.1.7103.GPU.FP16.engine
labelfile-path=samples/models/onx/labels.txt
force-implicit-batch-dim=1
input-dims=3;300;300;0
batch-size=1
process-mode=1
model-color-format=0
network-mode=2
num-detected-classes=2
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so
interval=0
gie-unique-id=1
input-blob-name=input_0
output-blob-names=boxes;scores

The resulting output after running the deepstream-app is:

Could not find NMS layer buffer while parsing
0:00:13.542989487 20826 0x1806d5e0 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:725> [UID = 1]: Failed to parse bboxes using custom parse function
Segmentation fault (core dumped)

Is the issue with the config parameters or with the model conversion? What additional steps should be followed while converting the model? The conversion was done using this script: https://github.com/dusty-nv/pytorch-ssd/blob/e7b5af50a157c50d3bab8f55089ce57c2c812f37/onnx_export.py

Hi,

You will need to update the model info in nvdsparsebbox_ssd.cpp as well.

The error above is caused by an incorrect layer name.
Please update the "NMS" string to the corresponding output layer name of your model.

  if (nmsLayerIndex == -1) {
    for (unsigned int i = 0; i < outputLayersInfo.size(); i++) {
      if (strcmp(outputLayersInfo[i].layerName, "NMS") == 0) {
      ...
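
For the ONNX export in this thread, whose outputs are named boxes and scores (see output-blob-names in the config above), the idea is to look those names up instead of "NMS". A minimal sketch of that kind of lookup, assuming only the NvDsInferLayerInfo fields already used in the sample (the helper name findLayerIndex is just for illustration):

#include <cstring>
#include <vector>

#include "nvdsinfer_custom_impl.h"

/* Illustrative helper: find an output layer by name, e.g. "boxes" or "scores" for
 * this ONNX export, instead of the hard-coded "NMS" of the UFF SSD sample.
 * Returns the index into outputLayersInfo, or -1 if the layer is not present. */
static int findLayerIndex(const std::vector<NvDsInferLayerInfo> &outputLayersInfo,
                          const char *name)
{
  for (unsigned int i = 0; i < outputLayersInfo.size(); i++) {
    if (strcmp(outputLayersInfo[i].layerName, name) == 0)
      return (int) i;
  }
  return -1;
}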

Thanks.

Hi @AastaLLL, thanks for the reply. My network has outputs named scores and boxes, so I modified "NMS" and "NMS1" to "scores" and "boxes" and rebuilt the custom parser. Now I'm no longer getting that error, but the script stops with

Segmentation fault (core dumped)

It parsed successfully, as shown below, but I couldn't find where the error is:

0:00:12.321735558 25973 0x3d53a000 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/home/jetson/videostreams/samples/models/onx/ssd-mobilenet.onnx.1.1.7103.GPU.FP16.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_0 3x300x300
1 OUTPUT kFLOAT scores 3000x3
2 OUTPUT kFLOAT boxes 3000x4

0:00:12.326119852 25973 0x3d53a000 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /home/jetson/videostreams/samples/models/onx/ssd-mobilenet.onnx.1.1.7103.GPU.FP16.engine
0:00:12.396903621 25973 0x3d53a000 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config_onx.txt sucessfully
Warning: gst-library-error-quark: Rounding muxer output width to the next multiple of 8: 304 (5): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvmultistream/gstnvstreammux.c(2299): gst_nvstreammux_change_state (): /GstPipeline:pipeline0/GstNvStreamMux:Stream-muxer
Segmentation fault (core dumped)

Hi,

Could you run the DeepStream pipeline with debug enabled?

$ deepstream-app -c <config> --gst-debug=5

Thanks.

Damned… Same situation here… Exactly the same. @haneesh24199 have you been able to solve that?

No, buddy @foreverneilyoung. Let me know too if you solve it.

Will surely do :)
Thanks


I’m wondering how this

$ deepstream-app -c <config> --gst-debug=5

could have been marked as the “solution”. It just produces tons of traces, and the segmentation fault still happens but leaves no trace at all…

OK, I have something running. Do you have some time to discuss/follow? I could offer my solution

Yes buddy. Please go ahead

OK, so first my setup: I have a well-running inference solution in Python for three USB cams on a Jetson Nano. While I'm pretty happy with the inference frame rate (30 fps per camera with optimized 4-class networks), I was trying to achieve the same with a Raspberry Pi/Coral TPU solution. Since I could not get Google's transfer learning example to run on either GPU, I switched to the very nice tutorials from @dusty_nv, who provides very detailed and clean tutorials for transfer learning on a Jetson Nano.

So I followed his “reduce an SSD-MobileNetV1 to 9 fruits” sample and was able to create a stripped-down model. It was also possible to run this model in the context of Dusty's jetson-inference project.

Before diving into the problems of converting ONNX to TensorFlow to TensorFlow Lite to compiled EdgeTPU code, I thought about using my newly created model in the aforementioned context of a well-running DeepStream SDK app.

OK, so I took the SSD network and created a PGIE configuration file for it. This is what it looks like at the moment, and it works:

[property]
workspace-size=800
gpu-id=0
model-color-format=0
net-scale-factor=0.003921569790691137
onnx-file=/home/ubuntu/dragonfly-safety/jetson-inference/models/primary-detector-nano/ssd-mobilenet.onnx
labelfile-path=/home/ubuntu/dragonfly-safety/jetson-inference/models/primary-detector-nano/labels_onnx.txt
model-engine-file=/home/ubuntu/dragonfly-safety/jetson-inference/models/primary-detector-nano/ssd-mobilenet.onnx_b1_gpu0_fp16.engine
batch-size=1
network-mode=2
num-detected-classes=9
maintain-aspect-ratio=1
gie-unique-id=1
is-classifier=0
output-blob-names=boxes;scores
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=/home/ubuntu/dragonfly-safety/jetson-inference/models/primary-detector-nano/libnvdsinfer_custom_impl_ssd.so

You see, I had already messed with the same custom bbox lib as you did and came to exactly the same result: segmentation fault.

I found this sample on the net, retinanet-examples/nvdsparsebbox_retinanet.cpp at main · NVIDIA/retinanet-examples · GitHub, and found that, judging by the naming alone, its bbox parser comes much closer to what is in the SSD net.

I was not able to find a “classes” output, which is strange, since at least the class_id should be determinable. But “boxes” and “scores” were found. For now I see my camera image, which always claims to see “apples” (since I hardcoded class_id 1), but no visible bounding boxes on the screen. I don't know if those need another configuration somewhere; for sure there is one.

So for now I took the /opt/nvidia/deepstream/deepstream5.1/sources/objectDetector_SSD/nvdsinfer_custom_impl_ssd project for the build and used this content for nvdsparsebbox_ssd.cpp:

/*
 * Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */


#include <cstring>
#include <iostream>
#include "nvdsinfer_custom_impl.h"

#define MIN(a,b) ((a) < (b) ? (a) : (b))
#define MAX(a,b) ((a) > (b) ? (a) : (b))
#define CLIP(a,min,max) (MAX(MIN(a, max), min))

/* This is a sample bounding box parsing function for the sample SSD UFF
 * detector model provided with the TensorRT samples. */

extern "C"
bool NvDsInferParseCustomSSD (std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo  const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList);

/* C-linkage to prevent name-mangling */
extern "C"
bool NvDsInferParseCustomSSD (std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo  const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
  static int bboxLayerIndex = -1;
  static int classesLayerIndex = -1;
  static int scoresLayerIndex = -1;
  static NvDsInferDimsCHW scoresLayerDims;
  int numDetsToParse;

  /* Find the bbox layer */
  if (bboxLayerIndex == -1) {
    for (unsigned int i = 0; i < outputLayersInfo.size(); i++) {
      if (strcmp(outputLayersInfo[i].layerName, "boxes") == 0) {
        bboxLayerIndex = i;
        break;
      }
    }
    if (bboxLayerIndex == -1) {
      std::cerr << "Could not find bbox layer buffer while parsing" << std::endl;
      return false;
    }
  }

  /* Find the scores layer */
  if (scoresLayerIndex == -1) {
    for (unsigned int i = 0; i < outputLayersInfo.size(); i++) {
      if (strcmp(outputLayersInfo[i].layerName, "scores") == 0) {
        scoresLayerIndex = i;
        getDimsCHWFromDims(scoresLayerDims, outputLayersInfo[i].dims);
        break;
      }
    }
    if (scoresLayerIndex == -1) {
      std::cerr << "Could not find scores layer buffer while parsing" << std::endl;
      return false;
    }
  }

  /* Find the classes layer */
  if (classesLayerIndex == -1) {
    for (unsigned int i = 0; i < outputLayersInfo.size(); i++) {
      if (strcmp(outputLayersInfo[i].layerName, "classes") == 0) {
        classesLayerIndex = i;
        break;
      }
    }
    // if (classesLayerIndex == -1) {
    //   std::cerr << "Could not find classes layer buffer while parsing" << std::endl;
    //   return false;
    // }
  }

  std::cout << "bboxLayerIndex " << bboxLayerIndex << " classesLayerIndex " << classesLayerIndex
            << " scoresLayerIndex " << scoresLayerIndex << std::endl;
  
  /* Calculate the number of detections to parse */
  numDetsToParse = scoresLayerDims.c;

  float *bboxes = (float *) outputLayersInfo[bboxLayerIndex].buffer;
  //float *classes = (float *) outputLayersInfo[classesLayerIndex].buffer;
  float *scores = (float *) outputLayersInfo[scoresLayerIndex].buffer;
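  /* Per the engine info above, "scores" is 3000x3, i.e. one score per class per candidate box,
   * so scoresLayerDims.c is 3000, while scores[indx] below reads the buffer as if there were a
   * single score per detection. */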
  
  for (int indx = 0; indx < numDetsToParse; indx++)
  {
    float outputX1 = bboxes[indx * 4];
    float outputY1 = bboxes[indx * 4 + 1];
    float outputX2 = bboxes[indx * 4 + 2];
    float outputY2 = bboxes[indx * 4 + 3];
    float this_class = 0; //classes[indx];
    float this_score = scores[indx];
    float threshold = detectionParams.perClassThreshold[this_class];
    
    if (this_score >= threshold)
    {
      NvDsInferParseObjectInfo object;
      
      object.classId = 1;
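      /* classId is hardcoded to 1 ("apple" in this label file) because this model exposes no
       * "classes" output layer. */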
      object.detectionConfidence = this_score;

      object.left = outputX1;
      object.top = outputY1;
      object.width = outputX2 - outputX1;
      object.height = outputY2 - outputY1;

      objectList.push_back(object);
    }
  }
  return true;
}

/* Check that the custom function has been defined correctly */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomSSD);

I find it a bit clearer, also because the number of classes is derived rather than hardcoded, but I didn't compare it to the original version in order to find the problem with the segfault.

This works: my extra trace is visible (there is no classes element, sigh), an apple is detected every now and then, but no bbox is drawn as an overlay yet.

It would be great if you could give me your opinion on this changed file.


Thanks for the information @foreverneilyoung. I just need about two days to try this out since I'm currently working on a different project. I'll surely get back to you by then.

No problem. I think I will have solved it by then :)

Well, this is a better working state, kind of a mix. I'm not sure if it makes sense at all or whether it just interprets random memory garbage. The reason for the crash was the for loop, which initially ran too far IMHO. I commented that part out and replaced keepCount with a value derived from another layer. I also think the classId determination is not correct, which is strange, since the bounding boxes appear to be good…

/*
 * Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */


#include <cstring>
#include <iostream>
#include "nvdsinfer_custom_impl.h"

#define MIN(a,b) ((a) < (b) ? (a) : (b))
#define MAX(a,b) ((a) > (b) ? (a) : (b))
#define CLIP(a,min,max) (MAX(MIN(a, max), min))

/* This is a sample bounding box parsing function for the sample SSD UFF
 * detector model provided with the TensorRT samples. */

extern "C"
bool NvDsInferParseCustomSSD (std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo  const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList);

/* C-linkage to prevent name-mangling */
extern "C"
bool NvDsInferParseCustomSSD (std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo  const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
  static int bboxLayerIndex = -1;
  static int scoresLayerIndex = -1;
  static NvDsInferDimsCHW scoresLayerDims;
  int numDetsToParse;

  /* Find the bbox layer */
  if (bboxLayerIndex == -1) {
    for (unsigned int i = 0; i < outputLayersInfo.size(); i++) {
      if (strcmp(outputLayersInfo[i].layerName, "boxes") == 0) {
        bboxLayerIndex = i;
        break;
      }
    }
    if (bboxLayerIndex == -1) {
    std::cerr << "Could not find bbox layer buffer while parsing" << std::endl;
    return false;
    }
  }

  /* Find the scores layer */
  if (scoresLayerIndex == -1) {
    for (unsigned int i = 0; i < outputLayersInfo.size(); i++) {
      if (strcmp(outputLayersInfo[i].layerName, "scores") == 0) {
        scoresLayerIndex = i;
        getDimsCHWFromDims(scoresLayerDims, outputLayersInfo[i].dims);
        break;
      }
    }
    if (scoresLayerIndex == -1) {
      std::cerr << "Could not find scores layer buffer while parsing" << std::endl;
      return false;
    }
  }

  /* Calculate the number of detections to parse */
  numDetsToParse = scoresLayerDims.c;
  
  int keepCount = numDetsToParse; //*((int *) outputLayersInfo[scoresLayerIndex].buffer);
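  /* The original SSD sample dereferenced a keep-count output layer here (the commented-out code);
   * this ONNX model has no such layer, so all scoresLayerDims.c candidate rows are iterated. */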
  float *detectionOut = (float *) outputLayersInfo[bboxLayerIndex].buffer;
  int numClassesToParse = detectionParams.numClassesConfigured;
  
  for (int i = 0; i < keepCount; i++)
  {
    float* det = detectionOut + i * 7;
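    /* Note: this stride of 7 floats per detection (id, class, score, 4 box corners) matches the
     * UFF SSD sample's NMS output, not the 3000x4 "boxes" layer of this ONNX model, which is why
     * the classId and score read below do not come out right for it. */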
    int classId = det[1];

    if (classId > numClassesToParse) {
      continue;
    }
    float threshold = detectionParams.perClassPreclusterThreshold[classId];
    if (det[2] < threshold) {
      continue;
    }

    unsigned int rectx1, recty1, rectx2, recty2;
    NvDsInferObjectDetectionInfo object;

    rectx1 = det[3] * networkInfo.width;
    recty1 = det[4] * networkInfo.height;
    rectx2 = det[5] * networkInfo.width;
    recty2 = det[6] * networkInfo.height;

    object.classId = classId;


    object.detectionConfidence = det[2];

    /* Clip object box co-ordinates to network resolution */
    object.left = CLIP(rectx1, 0, networkInfo.width - 1);
    object.top = CLIP(recty1, 0, networkInfo.height - 1);
    object.width = CLIP(rectx2, 0, networkInfo.width - 1) - object.left + 1;
    object.height = CLIP(recty2, 0, networkInfo.height - 1) - object.top + 1;

    //std::cerr << det[0] << " " << (int)det[1] << " " << det[2] << " " << det[3] << " " << det[4] << " " << det[5] << " " << det[6] << " " << std::endl;
  
    objectList.push_back(object);
  }
  
  return true;
}

/* Check that the custom function has been defined correctly */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomSSD);

Here is the solution. I was able to make it run with the help of @dusty_nv.

Unfortunately the results are disappointing: I was hoping to get a much higher inference rate with this. It turns out I cannot achieve more than 15 fps per camera for 3 cams, which sums up to roughly 45 fps.

Note: it is most likely NOT the parser that makes it that slow.

I have renamed the lib and all the files to reflect that this is a custom parser for ONNX.

/*
 * Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */


#include <cstring>
#include <iostream>
#include "nvdsinfer_custom_impl.h"

#define MIN(a,b) ((a) < (b) ? (a) : (b))
#define MAX(a,b) ((a) > (b) ? (a) : (b))
#define CLIP(a,min,max) (MAX(MIN(a, max), min))

/* This is a sample bounding box parsing function for the sample SSD UFF
 * detector model provided with the TensorRT samples. */

extern "C"
bool NvDsInferParseCustomONNX (std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo  const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList);

/* C-linkage to prevent name-mangling */
extern "C"
bool NvDsInferParseCustomONNX (std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo  const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList)
{

  static int bboxLayerIndex = -1;
  static int scoresLayerIndex = -1;
  static NvDsInferDimsCHW bboxLayerDims;

  /* Find the bbox layer */
  if (bboxLayerIndex == -1) {
    for (unsigned int i = 0; i < outputLayersInfo.size(); i++) {
      if (strcmp(outputLayersInfo[i].layerName, "boxes") == 0) {
        bboxLayerIndex = i;
        getDimsCHWFromDims(bboxLayerDims, outputLayersInfo[bboxLayerIndex].dims);
        break;
      }
    }
    if (bboxLayerIndex == -1) {
      std::cerr << "Could not find bbox layer buffer while parsing" << std::endl;
      return false;
    }
  }

  /* Find the scores layer */
  if (scoresLayerIndex == -1) {
    for (unsigned int i = 0; i < outputLayersInfo.size(); i++) {
      if (strcmp(outputLayersInfo[i].layerName, "scores") == 0) {
        scoresLayerIndex = i;
        break;
      }
    }
    if (scoresLayerIndex == -1) {
      std::cerr << "Could not find scores layer buffer while parsing" << std::endl;
      return false;
    }
  }
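
  /* Per the engine info above, "boxes" is 3000x4: bboxLayerDims.c candidate boxes with
   * bboxLayerDims.h (= 4) normalized corner coordinates each, scaled to the network
   * resolution further below. */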
  uint32_t numBoxes = bboxLayerDims.c;
  uint32_t numCoord = bboxLayerDims.h;

  float *bbox = (float *) outputLayersInfo[bboxLayerIndex].buffer;
  float *conf = (float *) outputLayersInfo[scoresLayerIndex].buffer;
  uint32_t mNumClasses = detectionParams.numClassesConfigured;
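  /* The loop below treats "scores" as numBoxes x numClassesConfigured and skips class 0 as
   * background; this assumes num-detected-classes in the config matches the class dimension
   * of the scores layer. */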
  
  for (uint32_t n = 0; n < numBoxes; n++)
  {
    uint32_t maxClass = 0;
    float    maxScore = -1000.0f;

    for (uint32_t m = 1; m < mNumClasses; m++) {
      const float score = conf[n * mNumClasses + m];
      if (score < detectionParams.perClassThreshold[m])
        continue;
      if (score > maxScore) {
        maxScore = score;
        maxClass = m;
      }
    }
    // check if there was a detection
    if (maxClass <= 0)
      continue;

    const float* coord = bbox + n * numCoord;
    NvDsInferObjectDetectionInfo object;
    object.classId = maxClass;
    object.detectionConfidence = maxScore;
    object.left = coord[0] * networkInfo.width;
    object.top = coord[1] * networkInfo.height;
    object.width = coord[2] * networkInfo.width - coord[0] * networkInfo.width;
    object.height = coord[3] * networkInfo.height - coord[1] * networkInfo.height;
    objectList.push_back(object);
  }
  
  return true;
}

/* Check that the custom function has been defined correctly */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomONNX);