Hi, I am developing DeepStream applications using the Python API. In particular, we are deploying multi-stage detection pipelines: secondary detectors run on the bounding-box outputs of a primary object detector. Most of the models are ONNX models converted to TensorRT, and we use custom source and sink pad probes.
I have a question about the best practice for collecting the results of the individual later-stage detectors (e.g. keypoints, action detection, etc.). Currently, I check the user meta of each object meta at the end of the pipeline and read out the outputs there. However, this gets messy, because there is no direct relation between a given user meta and the detector that produced it.
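For illustration, this is roughly what the read-out looks like today. This is only a sketch, assuming `output-tensor-meta=1` is set on the secondary nvinfer elements so their raw outputs are attached as `NVDSINFER_TENSOR_OUTPUT_META`; as far as I can tell, the `unique_id` field of `NvDsInferTensorMeta` (matching the `gie-unique-id` of the attaching nvinfer) is the only link back to a specific detector:

```python
from collections import defaultdict

def group_by_unique_id(pairs):
    """Group (unique_id, meta) pairs so each detector's outputs can be
    looked up by the gie-unique-id of the nvinfer that produced them."""
    grouped = defaultdict(list)
    for uid, meta in pairs:
        grouped[uid].append(meta)
    return dict(grouped)

def iter_object_tensor_meta(obj_meta):
    """Yield (unique_id, NvDsInferTensorMeta) for one object.
    Needs the DeepStream pyds bindings at runtime."""
    import pyds  # only available on a DeepStream installation
    l_user = obj_meta.obj_user_meta_list
    while l_user is not None:
        user_meta = pyds.NvDsUserMeta.cast(l_user.data)
        if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
            tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
            yield tensor_meta.unique_id, tensor_meta
        l_user = l_user.next

# In the sink pad probe, per object (KEYPOINT_GIE_ID is a placeholder for
# the gie-unique-id configured on the keypoint detector):
#   per_detector = group_by_unique_id(iter_object_tensor_meta(obj_meta))
#   keypoint_outputs = per_detector.get(KEYPOINT_GIE_ID, [])
```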
Maybe a better way to handle this would be to process the outputs of each detector right after it runs, which could be done with a source pad probe on that detector. However, I would then need some custom metadata field where I can store the results (for example, a field holding the detected keypoints for each object). I could not find a fitting example of how to implement this.
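What I have in mind is something along these lines. This is a sketch only: `alloc_custom_struct` and the `keypoints` field assume a custom pyds binding for a user-defined C struct, similar to the `deepstream-custom-binding-test` sample in deepstream_python_apps (the binding would also have to handle the struct's copy/release); `pack_keypoints` is a hypothetical helper I made up for flattening the data:

```python
def pack_keypoints(keypoints, max_points=17):
    """Flatten [(x, y, conf), ...] into a fixed-length float list so it fits
    a plain C struct; pads with -1.0. Purely illustrative packing scheme."""
    flat = []
    for x, y, conf in keypoints[:max_points]:
        flat.extend([float(x), float(y), float(conf)])
    flat.extend([-1.0] * (3 * max_points - len(flat)))
    return flat

def attach_keypoints_to_object(batch_meta, obj_meta, keypoints):
    """Attach keypoints to one object, called from a source pad probe on the
    keypoint detector. Assumes a custom binding exposing alloc_custom_struct,
    as in the deepstream-custom-binding-test sample."""
    import pyds  # only available on a DeepStream installation
    user_meta = pyds.nvds_acquire_user_meta_from_pool(batch_meta)
    custom = pyds.alloc_custom_struct(user_meta)   # hypothetical custom binding
    custom.keypoints = pack_keypoints(keypoints)   # field on the custom struct
    user_meta.user_meta_data = custom
    user_meta.base_meta.meta_type = pyds.NvDsMetaType.NVDS_USER_META
    pyds.nvds_add_user_meta_to_obj(obj_meta, user_meta)
```

Since the custom meta is attached right where the detector's probe runs, a downstream probe would know exactly which field belongs to which detector.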
Can you suggest a good way to handle this?
• Hardware Platform (Jetson / GPU) Both
• DeepStream Version 6.1.1
• JetPack Version (valid for Jetson only) Latest
• TensorRT Version 8.4
• NVIDIA GPU Driver Version (valid for GPU only) 520
• Issue Type( questions, new requirements, bugs) Questions.
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description) Nvinfer plugin