Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
Jetson NX
• DeepStream Version
DeepStream 5.0
• JetPack Version (valid for Jetson only)
JetPack 4.4 (L4T 32.4.4)
• TensorRT Version
TensorRT 7.1.3.0
• NVIDIA GPU Driver Version (valid for GPU only)
CUDA 10.2.89, CUDNN 8.0, Driver unknown
• Issue Type( questions, new requirements, bugs)
For our face recognition module, we have created a custom parser function for RetinaFace. It works fine, and we can see the output inside the custom parser function. We are following the example given in https://docs.nvidia.com/metropolis/deepstream/4.0/dev-guide/DeepStream_Development_Guide/baggage/nvdsinfer__custom__impl_8h.html.
First question: how do we pass the bounding box and the 5 facial landmarks to the next stage, or into the metadata? For example, there is no clear instruction on how the objectList filled by the following function ends up in the metadata.
typedef bool (* NvDsInferParseCustomFunc) (
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList);
Second question: how do we do face alignment (face warping) between the pgie and the sgie, as shown in the attached image of our face recognition pipeline?
Third question: how do we get a custom output from the sgie, where we need the face features for face matching? You can think of the features as an embedding of 128 or 512 floats from the secondary network.
It would be great if you could point us to a direct example of how to achieve this.
Here are a few posts we have followed, but we still have no clue how to achieve this goal.
Thanks.