Very low confidence scores at inference when using DeepStream with SCRFD

• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 6.0.1
• JetPack Version: 4.6
• TensorRT Version: 8.0
• Issue Type: Bug
I am using DeepStream to build a face recognition app.
The pipeline uses SCRFD to detect faces, then ArcFace to extract the embedding vector.
I am currently stuck on the custom output parsing for the SCRFD model. Below is the C++ code for the custom parsing library. The model is the SCRFD ONNX model, which works fine when deployed with Triton Inference Server, but when deployed with DeepStream (converted to a TensorRT engine with trtexec) it gives very low confidence scores.

model.onnx (3.2 MB)
nvds_parsebbox_scrfd.cpp (5.2 KB)
labels.txt (9 Bytes)
srcfd_config.txt (537 Bytes)
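
For reference, here is a minimal sketch of how SCRFD per-stride outputs are typically decoded (illustrative only: this is not the attached nvds_parsebbox_scrfd.cpp, the function name decodeScrfdStride is made up, and the assumptions that there are 2 anchors per cell, that the scores are already sigmoid-activated in the ONNX export, and that the bbox outputs are distances in stride units should be checked against the actual model):

```cpp
#include <vector>

struct FaceBox { float x1, y1, x2, y2, score; };

// Decode one SCRFD stride level (e.g. 8, 16 or 32) into face boxes.
// Assumed layout: scores[gridH * gridW * numAnchors],
//                 bboxes[gridH * gridW * numAnchors * 4] as (l, t, r, b) distances.
static std::vector<FaceBox> decodeScrfdStride(
    const float* scores, const float* bboxes,
    int stride, int inputW, int inputH,
    float scoreThreshold, int numAnchors = 2)
{
    std::vector<FaceBox> out;
    const int gridW = inputW / stride;
    const int gridH = inputH / stride;
    const int total = gridW * gridH * numAnchors;

    for (int i = 0; i < total; ++i) {
        const float score = scores[i];   // assumed to be post-sigmoid already
        if (score < scoreThreshold)
            continue;

        // Both anchors of a cell share the same center (x * stride, y * stride).
        const int cell = i / numAnchors;
        const float cx = static_cast<float>(cell % gridW) * stride;
        const float cy = static_cast<float>(cell / gridW) * stride;

        // distance2bbox: predictions are distances from the center, scaled by stride.
        const float l = bboxes[i * 4 + 0] * stride;
        const float t = bboxes[i * 4 + 1] * stride;
        const float r = bboxes[i * 4 + 2] * stride;
        const float b = bboxes[i * 4 + 3] * stride;

        out.push_back({cx - l, cy - t, cx + r, cy + b, score});
    }
    return out;
}
```

If the decode matches this but the scores are still near zero, the problem is more likely in the preprocessing than in the parser.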

The command I use to convert the ONNX model to a TensorRT engine:
/usr/src/tensorrt/bin/trtexec --onnx=./models/srcfd_2/model.onnx --saveEngine=./models/srcfd_2/model.engine --explicitBatch --workspace=14336 --fp16 --minShapes=input.1:1x3x640x640 --optShapes=input.1:1x3x640x640 --maxShapes=input.1:1x3x640x640 --shapes=input.1:1x3x640x640
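
(Note: a sanity check that is just a suggestion, not something confirmed in this thread: since the engine above is built with --fp16, an FP32 engine can be built by dropping that flag and the scores compared; the model_fp32.engine name below is only illustrative.)

/usr/src/tensorrt/bin/trtexec --onnx=./models/srcfd_2/model.onnx --saveEngine=./models/srcfd_2/model_fp32.engine --explicitBatch --workspace=14336 --minShapes=input.1:1x3x640x640 --optShapes=input.1:1x3x640x640 --maxShapes=input.1:1x3x640x640

If the FP32 engine also produces very low scores, the issue is more likely preprocessing or output parsing than precision.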

[screenshots of the detection results and the logged confidence scores]

When I log the detection results, I get very low confidence scores.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Please ensure the preprocessing is the same as the Triton server's. Please refer to the nvinfer documentation for the parameter explanations.
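
For anyone comparing against the attached srcfd_config.txt: in nvinfer the preprocessing is mainly controlled by net-scale-factor, offsets and model-color-format. As an illustration only, assuming the standard InsightFace SCRFD normalization of (pixel - 127.5) / 128 on RGB input (verify against what the Triton pipeline actually does), the relevant lines would look like:

```
[property]
# assumed SCRFD preprocessing: (pixel - 127.5) / 128, RGB input
net-scale-factor=0.0078125
offsets=127.5;127.5;127.5
# 0 = RGB, 1 = BGR; must match the color order used on the Triton side
model-color-format=0
```

If these are left at their defaults (net-scale-factor=1.0, no offsets), the network sees raw 0-255 pixel values instead of normalized inputs, which typically shows up as very low confidence scores.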

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.