Here is my config:
• Hardware Platform (Jetson / GPU): 2080Ti or TITAN V
• DeepStream Version: 5.0
• TensorRT Version: 7.0.0-1+cuda10.2
• NVIDIA GPU Driver Version: 440.33.01
Recently I tried to build a face recognition detector based on the sample program deepstream-test5. The model comes from the RetinaFace project. I built the corresponding .engine file following the project's README, implemented and compiled the .so file containing the "parse-bbox-func" custom parser as described in the DS SDK, and modified the config.txt file accordingly.
But at runtime the program hardly produces any detections. Looking at the output, I found that the model's confidence for almost every anchor prediction is exactly 0.5. After tracing the whole pipeline, I found that the model's output before the final Softmax layer is almost all zeros, which suggests my model weights are not actually being loaded.
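As a sanity check (my own illustration, not taken from the DeepStream code): a two-class face/background softmax fed all-zero logits yields exactly 0.5 for both classes, which matches the symptom above:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - np.max(x))
    return e / e.sum()

# If the layer feeding a two-class softmax outputs all zeros
# (e.g. because weights never loaded), every anchor scores 0.5:
logits = np.zeros(2)
print(softmax(logits))  # -> [0.5 0.5]
```

So uniform 0.5 confidences are consistent with an uninitialized (all-zero) network rather than a parsing bug.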
My configuration is basically the same as SampleSSD, and I implemented the corresponding decoder and parser. When I point my engine file and config file at the SSD model instead, the pipeline runs normally.
Did I overlook any additional details? I noticed that the SSD sample loads a .uff model, while mine uses a single pre-built engine file. Could that difference matter?
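For context, this is a sketch of the nvinfer [property] keys I believe are relevant when loading a pre-built engine directly instead of a UFF model (key names from the nvinfer config reference; the file paths and the parser function name are placeholders, not my actual values):

```
[property]
# Load the serialized TensorRT engine directly; no uff-file / model-file key needed.
model-engine-file=retinaface.engine
# Custom output parser exported from the compiled .so
parse-bbox-func-name=NvDsInferParseCustomRetinaFace
custom-lib-path=libnvdsinfer_custom_impl_retinaface.so
# FP16 inference; must match how the engine was built
network-mode=2
num-detected-classes=1
```

If both a uff-file and a model-engine-file are present, or the engine was built for a different TensorRT version or precision, nvinfer may silently rebuild or fall back, so it is worth checking the plugin's startup log for which model file it actually loaded.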
Here is part of my config file: