Will deepstream-infer-tensor-meta-test be available in Python?

Hi.
I would really appreciate knowing whether deepstream_infer_tensor_meta_test in Python is on your roadmap. I have run DeepStream test1 with an H264 video and PeopleNet to detect faces. I would now like to apply FaceNet to each detected face, but I cannot find how to do that, even after reading test2 (I have a TensorRT engine built from FaceNet, but I am not sure how to run the inference). Please, help me…

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) -> DGX A100
• DeepStream Version -> 5.0
• JetPack Version (valid for Jetson only) -> does not apply
• TensorRT Version -> 7.1.3.4
• NVIDIA GPU Driver Version (valid for GPU only) -> 450.51.06
• Issue Type (questions, new requirements, bugs) -> New requirement

• Requirement details (for a new requirement, include the module name, i.e. which plugin or which sample application, and a description of the function)

Do you mean you want to use two models with DeepStream? That is, the first model is PeopleNet, which detects faces, and the second model is FaceNet, which identifies characteristics of those faces?
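In that cascade, FaceNet would typically run as a secondary GIE (SGIE) operating on the objects that PeopleNet produces. Below is a minimal, hedged sketch of the SGIE's nvinfer config, assuming a hypothetical engine path `facenet.engine` and that PeopleNet runs as the primary GIE with `gie-unique-id=1` (adjust to your setup):

```ini
[property]
gpu-id=0
# Hypothetical path to your FaceNet TensorRT engine
model-engine-file=facenet.engine
# 2 = secondary mode: run on objects detected by an upstream GIE
process-mode=2
# Operate on detections from the primary GIE (PeopleNet, gie-unique-id=1)
operate-on-gie-id=1
gie-unique-id=2
# 100 = "other" network type; the application parses raw output tensors itself
network-type=100
# Attach raw output tensors to the metadata so a pad probe can read them
output-tensor-meta=1
batch-size=16
```

With `output-tensor-meta=1` and `network-type=100`, nvinfer attaches the raw FaceNet output to each object's user meta instead of trying to parse it as detections or classifications.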

The deepstream-ssd-parser Python sample also demonstrates inference with "output-tensor-meta" enabled. You may refer to it instead of deepstream-infer-tensor-meta-test.
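As in that sample, once the SGIE attaches raw tensors, a pad probe can walk the batch meta and read the FaceNet output from each object's user meta via the pyds bindings. A rough sketch follows; the 128-dimension embedding size and the L2 normalization step are assumptions about a FaceNet-style model, not something DeepStream mandates:

```python
import ctypes
import math

try:
    # DeepStream Python bindings; only available where the SDK is installed
    import pyds
except ImportError:
    pyds = None

def l2_normalize(vec):
    """Normalize an embedding to unit length (common for FaceNet-style models)."""
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def sgie_src_pad_probe(pad, info, u_data):
    """Read raw FaceNet tensors attached by the SGIE (output-tensor-meta=1)."""
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            l_user = obj_meta.obj_user_meta_list
            while l_user is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if user_meta.base_meta.meta_type == \
                        pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                    tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                    # Assuming a single output layer holding the embedding
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)
                    ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                                      ctypes.POINTER(ctypes.c_float))
                    # Assumed 128-d FaceNet output; check your engine's dims
                    embedding = l2_normalize([ptr[i] for i in range(128)])
                    # ... compare `embedding` against your known-face gallery here ...
                l_user = l_user.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

You would attach this probe to the SGIE's src pad (as deepstream-infer-tensor-meta-test does in C) and do the face matching, e.g. cosine distance against stored embeddings, in your application code.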