Custom application: my parse-bbox function problem

I’m trying to run this model (from https://github.com/AIZOOTech/FaceMaskDetection) with deepstream-app, using these files:

deepstream_app_config_ssd_cam.txt (2.3 KB)
config_infer_primary_ssd_cam.txt (3.4 KB)
cxz_labels.txt (just a plain text file with the label strings Mask and NoMask) (11 Bytes)
and the .engine file (model_mask, shared on Google Drive)

At first I had a problem converting the model to a .engine file, but that is already solved.
As the next step, I found that I need to write my own parse-bbox function, so I tried to modify the code of the DeepStream example at this path:
/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD
working with the file “nvdsinfer_custom_impl_ssd_cam/nvdsparsebbox_ssd.cpp”,
but I can’t figure out what the output buffer contains or how to read it properly.

How can I properly parse the class and location (top/left/height/width) from my model’s output and pass them to DeepStream?
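Here is a minimal sketch of what I think the parser needs to look like, based on the interface in nvdsinfer_custom_impl.h used by the objectDetector_SSD sample. The output layer name (“detections”) and its per-detection layout are just my assumptions for illustration, since I don’t know what my engine actually outputs; the real AIZOO model produces raw anchor offsets and class scores, so anchor decoding would still be needed somewhere:

```cpp
/* Minimal sketch of a custom bbox parser for nvdsparsebbox_ssd.cpp.
 * ASSUMPTIONS (not verified against my engine): one output layer named
 * "detections" whose buffer holds rows of
 * [classId, confidence, left, top, width, height] in pixels relative to the
 * network input. */
#include <cstring>
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomMask(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    /* Locate the output layer by name. */
    const NvDsInferLayerInfo *layer = nullptr;
    for (const auto &l : outputLayersInfo) {
        if (std::strcmp(l.layerName, "detections") == 0) { /* assumed name */
            layer = &l;
            break;
        }
    }
    if (!layer || !layer->buffer)
        return false;

    const float *data = static_cast<const float *>(layer->buffer);
    /* Assumed layout: first dimension = number of detections, 6 floats each. */
    unsigned int numDetections = layer->inferDims.d[0];

    for (unsigned int i = 0; i < numDetections; ++i) {
        const float *row = data + i * 6;
        NvDsInferObjectDetectionInfo obj;
        obj.classId = static_cast<unsigned int>(row[0]);
        obj.detectionConfidence = row[1];

        /* Drop detections below the per-class threshold from the config. */
        if (obj.classId >= detectionParams.numClassesConfigured ||
            obj.detectionConfidence <
                detectionParams.perClassPreclusterThreshold[obj.classId])
            continue;

        /* Box in pixels relative to the network input
         * (networkInfo.width x networkInfo.height). */
        obj.left   = row[2];
        obj.top    = row[3];
        obj.width  = row[4];
        obj.height = row[5];
        objectList.push_back(obj);
    }
    return true;
}

/* Verify the function matches the prototype nvinfer expects. */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomMask);
```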

The Gst-nvinfer plugin does inferencing on input data using NVIDIA® TensorRT™. Please refer to https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_details.3.01.html#wwpID0E0OFB0HA for more details.
In addition, there are many examples of customizing the post-processing parser: you can refer to the YOLO/SSD/FRCNN samples packaged with DeepStream, and also to GitHub - NVIDIA-AI-IOT/deepstream_tao_apps: Sample apps to demonstrate how to deploy models trained with TAO on DeepStream.
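Once the custom parser is compiled into a shared library, it is registered with Gst-nvinfer through the parse-bbox-func-name and custom-lib-path keys in your nvinfer config (config_infer_primary_ssd_cam.txt). A rough sketch — the function name and library path below are placeholders that must match your actual build:

```
[property]
num-detected-classes=2
parse-bbox-func-name=NvDsInferParseCustomMask
custom-lib-path=/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD/nvdsinfer_custom_impl_ssd_cam/libnvdsinfer_custom_impl_ssd.so
```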