Please provide complete information as applicable to your setup.
• Hardware Platform (GPU)
• DeepStream Version: 5.1
• TensorRT Version: 7.2
• NVIDIA GPU Driver Version: 460.56
• Issue Type: Question
• How to reproduce the issue? (For bugs: include which sample app you are using, the configuration file contents, the command line used, and other details needed to reproduce.)
Hello, I have set up pose estimation as a secondary inference engine (SGIE) running after the primary detection engine, using deepstream-app. The pose-estimation SGIE is configured to run inference on person objects, and output-tensor-meta is enabled. However, in the probe I add before the OSD (the process_meta function in deepstream_app.c), the user meta of type NVDSINFER_TENSOR_OUTPUT_META is never present. When I debug attach_tensor_output_meta in the case where process_full_frame is false, I can see that the tensor output is attached to the object, so the attach itself succeeds.
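For reference, the relevant part of my SGIE configuration looks like the fragment below. The property names are standard nvinfer properties; the gie-id and class-id values are examples from my setup, not something to copy verbatim:

```
[property]
process-mode=2            # 2 = secondary mode, operate on objects
operate-on-gie-id=1       # only process objects produced by the primary engine
operate-on-class-ids=0    # example: class id of "person" in my detector
output-tensor-meta=1      # attach raw tensor output as user meta on the object
```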
My suspicion is that wait_queue_buf_probe in the secondary bin does not wait on the right condition, i.e. it does not actually block until all secondary engines have finished processing, even though the comment in the source describes it as:

> Wait for all secondary inferences to complete the processing and then send the processed buffer to downstream. This is a way of synchronizing all secondary infers and sending the buffer only once metadata from all secondary infer components has been attached. This is needed because all secondary infers process the same buffer in parallel.
Could you please look into this? Thank you.