Skip postprocessing when using nvinfer vs inferserver

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.2.5.1
• NVIDIA GPU Driver Version (valid for GPU only) 510
• Issue Type( questions, new requirements, bugs) question

Hi, I am trying to port my pgie from nvinferserver to nvinfer, but I ran into a strange problem with the post-processing step. Actually, in nvinferserver I used postprocess { other {} } to skip postprocessing and it worked fine. However, when using nvinfer with the following config:

[property]
gpu-id=0
net-scale-factor=0.017
offsets=123.675;116.28;103.53
model-engine-file=/workspaces/CrowdCounting-P2PNet/p2pnet_engine.trt
batch-size=1
process-mode=1
model-color-format=0
network-mode=0
gie-unique-id=1
output-blob-names=count;pred_logits;pred_points
output-tensor-meta=1
force-implicit-batch-dim=1
#scaling-filter=0
#scaling-compute-hw=0

it seems that nvinfer applies some default postprocessing steps, which is unexpected:

0:00:02.071875574 17245      0x3dee640 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:02.071915254 17245      0x3dee640 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:735> [UID = 1]: Failed to parse bboxes
Segmentation fault (core dumped)
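For reference, the relevant part of my nvinferserver config looks roughly like this (trimmed to the postprocess block; all other fields omitted):

```
infer_config {
  postprocess {
    other {}
  }
}
```

With this, nvinferserver skips output parsing entirely and just attaches the raw output tensors.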

I read the post Skip postprocessing when using nvinfer, but it doesn't seem to provide a clear config file. Can you clarify this? Thank you.

Hi @hoangtnm.cse ,
Is your model a classifier / detector / segmentation model?
Have a look at the doc here: Gst-nvinfer — DeepStream 6.1 Release documentation
There are a few parameters, such as parse-classifier-func-name, parse-bbox-func-name, and parse-bbox-instance-mask-func-name, that are used to specify post-processing functions. I believe they have default values; for instance, softmax is used for classification.
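As a sketch of how those parameters are typically wired up (the function name and library path here are hypothetical placeholders, not values from your setup):

```
[property]
# hypothetical custom parser; the function must be exported
# by the shared library given below
parse-bbox-func-name=NvDsInferParseCustomP2PNet
custom-lib-path=/path/to/libnvds_custom_parser.so
```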

Hi @hoangtnm.cse, you can refer to the link you attached to set the model up as a classifier:

network-type=1
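Putting it together, a sketch of the change against the config you posted (only the added and relevant lines shown):

```
[property]
# treat the model as a classifier so nvinfer's default bbox parser is not invoked
network-type=1
# raw output tensors (count, pred_logits, pred_points) are still attached
# as tensor meta for custom parsing downstream
output-tensor-meta=1
```

With output-tensor-meta=1 you can then read the raw tensors in a pad probe and implement the P2PNet decoding yourself.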

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.