MTCNN with DeepStream 6.2

Please provide complete information as applicable to your setup.

• Hardware Platform: dGPU
• DeepStream Version: 6.2

I'm trying to run a DeepStream application with the MTCNN model. I generated the .engine files by following (this repo ). I will upload the engine files.

My pgie config file looks like this:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=det1_relu.caffemodel
proto-file=det1_relu.prototxt
model-engine-file=det1.engine
labelfile-path=labels.txt
#int8-calib-file=../../models/Primary_Detector/cal_trt.bin
batch-size=1
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=1
interval=0
gie-unique-id=1
#output-blob-names=Layer18_cov;Layer18_bbox
output-blob-names=conv4-2;prob1
#parse-bbox-func-name=NvDsInferParseCustomResnet
#custom-lib-path=/path/to/libnvdsparsebbox.so
## 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=2
#scaling-filter=0
#scaling-compute-hw=0
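(Side note: the long net-scale-factor constant above is just 1/255, slightly rounded. Gst-nvinfer computes y = net-scale-factor * (x - mean) per pixel, so this maps 8-bit pixel values 0–255 into roughly [0, 1]. A quick check:)

```python
# net-scale-factor from the config vs. exact 1/255
config_value = 0.0039215697906911373
print(1 / 255)                  # 0.00392156862745098
print(config_value - 1 / 255)   # ~1.2e-09, effectively the same constant
```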

Following are the pipeline logs.

0:00:03.639508289 41426 0x563b485e0c10 INFO                 nvinfer gstnvinfer.cpp:751:gst_nvinfer_logger:<SGIE> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 3]: deserialized trt engine from :<path>.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input.1         3x112x112       
1   OUTPUT kFLOAT 1333            512             

0:00:03.736699844 41426 0x563b485e0c10 INFO                 nvinfer gstnvinfer.cpp:751:gst_nvinfer_logger:<SGIE> NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 3]: Use deserialized engine model: <path>.engine
0:00:03.739803463 41426 0x563b485e0c10 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<SGIE> [UID 3]: Load new model:<config_file>.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
0:00:06.922825667 41426 0x563b485e0c10 INFO                 nvinfer gstnvinfer.cpp:751:gst_nvinfer_logger:<PGIE> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/<path>.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT data            3x710x384       
1   OUTPUT kFLOAT conv4-2         4x350x187       
2   OUTPUT kFLOAT prob1           2x350x187       

0:00:07.033810197 41426 0x563b485e0c10 INFO                 nvinfer gstnvinfer.cpp:751:gst_nvinfer_logger:<PGIE> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model:<path>.engine
0:00:07.035994407 41426 0x563b485e0c10 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<PGIE> [UID 1]: Load new model:models/config_mtcnn.txt sucessfully
Running...
1
2
3
0:00:07.369208012 41426 0x563b475880c0 ERROR                nvinfer gstnvinfer.cpp:745:gst_nvinfer_logger:<PGIE> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::parseBoundingBox() <nvdsinfer_context_impl_output_parsing.cpp:59> [UID = 1]: Could not find output coverage layer for parsing objects
0:00:07.369224663 41426 0x563b475880c0 ERROR                nvinfer gstnvinfer.cpp:745:gst_nvinfer_logger:<PGIE> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:735> [UID = 1]: Failed to parse bboxes

Do I need to write a custom parser for this model as well?


There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Yes, you need to write a custom parser. You can refer to the link below: nvdsinfer_custombboxparser_tao.cpp.
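Concretely, the parser has to decode the two PNet output layers shown in the engine info: prob1 (a 2-channel face-probability map) and conv4-2 (4-channel per-cell box regression). A minimal sketch of that decoding logic in Python, assuming the standard PNet geometry (stride 2, 12x12 receptive window); the function name and threshold are illustrative, not from the repo above:

```python
def parse_pnet_output(prob, reg, threshold=0.6, stride=2, cell=12):
    """prob: [2][H][W] face/non-face probability map (channel 1 = face).
    reg:  [4][H][W] per-cell box regression offsets (dx1, dy1, dx2, dy2).
    Returns (x1, y1, x2, y2, score) boxes in network-input coordinates."""
    boxes = []
    h, w = len(prob[1]), len(prob[1][0])
    for y in range(h):
        for x in range(w):
            score = prob[1][y][x]
            if score < threshold:
                continue
            # map this feature-map cell back to its 12x12 window in the input
            x1, y1 = x * stride, y * stride
            x2, y2 = x1 + cell, y1 + cell
            # apply the regression offsets, scaled by the window size
            dx1, dy1, dx2, dy2 = (reg[c][y][x] for c in range(4))
            boxes.append((x1 + dx1 * cell, y1 + dy1 * cell,
                          x2 + dx2 * cell, y2 + dy2 * cell, score))
    return boxes
```

The actual parser must be written in C++ against the NvDsInferParseCustomFunc interface (as in nvdsinfer_custombboxparser_tao.cpp), compiled into a shared library, and wired in by uncommenting parse-bbox-func-name and custom-lib-path in the [property] group; NMS across the resulting boxes is then handled by cluster-mode=2.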

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.