• Hardware Platform: Tesla T4
• DeepStream Version: 5.0
• TensorRT Version: 7.0
• NVIDIA GPU Driver Version: 450.57
I have a face alignment custom model deployed successfully to Triton Inference Server, with two inputs:
- a 112x112x3 face image
- a set of 5 landmark points for that face image
The output of this model is an aligned face image.
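For context, a two-input model like this is typically declared in the Triton model repository's `config.pbtxt`. The sketch below is illustrative only; the tensor names, data types, dims, and platform are assumptions, not my actual configuration:

```
# Hypothetical config.pbtxt for the alignment model (names/dims assumed)
name: "face_alignment"
platform: "tensorrt_plan"
max_batch_size: 16
input [
  {
    name: "face_image"     # 112x112x3 face crop (assumed name)
    data_type: TYPE_FP32
    dims: [ 112, 112, 3 ]
  },
  {
    name: "landmarks"      # 5 (x, y) landmark points (assumed name)
    data_type: TYPE_FP32
    dims: [ 5, 2 ]
  }
]
output [
  {
    name: "aligned_face"   # aligned face image (assumed name)
    data_type: TYPE_FP32
    dims: [ 112, 112, 3 ]
  }
]
```

The question below is about how to feed the second input (`landmarks` here) from upstream metadata rather than from a client request.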
I’m trying to deploy this custom model to the nvinferserver element of DeepStream 5, where the upstream element is a primary face detection model that also produces landmarks.
The problem is that I don’t know how to pass the face landmarks (available as NvDsInferTensorMeta from the upstream face detection model) as the second input to this Triton custom model.
The Gst-nvinferserver Configuration File Specifications do not seem to mention how to map upstream tensor meta to a Triton model’s inputs.
Please give me advice. Thanks.