attach_tensor_output_meta output not synchronized before the OSD plugin

Please provide complete information as applicable to your setup.

• Hardware Platform: GPU
• DeepStream Version: 5.1
• TensorRT Version: 7.2
• NVIDIA GPU Driver Version: 460.56
• Issue Type: Questions
• How to reproduce the issue? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

Hello, I set up pose estimation as a secondary inference engine (SGIE) running after the primary detection engine. The application I use is deepstream-app.

For the pose estimation SGIE, I configured inference on person objects and enabled output-tensor-meta. However, the probe I added before the OSD (function process_meta in deepstream_app.c) does not see user meta of type NVDSINFER_TENSOR_OUTPUT_META. When I debug, I can see that attach_tensor_output_meta() (in the branch where process_full_frame is false) runs and attaches the tensor output to the object successfully.
It seems that wait_queue_buf_probe in the secondary bin does not wait for the right thing until all secondary inference is done, contrary to what is described in the code comment:

Wait for all secondary inferences to complete the processing and then send the processed buffer to downstream. This is way of synchronization between all secondary infers and sending buffer once meta data from all secondary infer components got attached. This is needed because all secondary infers process same buffer in parallel.

Please check it out. Thank you

Hey, could you share your sgie config file with us?



Engine config

#1 full frame | 2 objects
## 0=FP32, 1=INT8, 2=FP16 mode

Thanks, so you didn’t set the ‘classifier-async-mode’, right?


Following the condition nvinfer->output_tensor_meta && !nvinfer->classifier_async_mode,

the function attach_tensor_output_meta() was called and did attach the tensor to the object.

Yeah, got it. May I know where you currently installed the probe?

Exactly at gie_processing_done_buf_prob, on the sink pad of the OSD bin.

Ok, this seems to be a bug in the DeepStream SDK. For now, I think you can try adding the probe on the src pad of the SGIE instead, and I guess it will work; it's just a workaround.
For debugging the issue, would you mind creating a simple demo for us, so we can easily reproduce and root-cause it?
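The workaround could be sketched as below: a buffer probe on the SGIE src pad that walks the batch meta and checks each object's user meta for NVDSINFER_TENSOR_OUTPUT_META. This assumes the DeepStream 5.1 headers and a pipeline element named `sgie`; it is a sketch, not the exact deepstream-app code:

```c
/* Sketch only: requires the DeepStream SDK headers and a running pipeline. */
#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"

static GstPadProbeReturn
sgie_src_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj;
         l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      for (NvDsMetaList *l_user = obj_meta->obj_user_meta_list; l_user;
           l_user = l_user->next) {
        NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
        if (user_meta->base_meta.meta_type == NVDSINFER_TENSOR_OUTPUT_META) {
          /* The tensor output attached by the SGIE is visible here,
           * while it was missing in the probe before the OSD bin. */
          g_print ("Tensor meta on object %" G_GUINT64_FORMAT "\n",
                   obj_meta->object_id);
        }
      }
    }
  }
  return GST_PAD_PROBE_OK;
}

/* Attach it after creating the SGIE element, e.g.:
 *   GstPad *src_pad = gst_element_get_static_pad (sgie, "src");
 *   gst_pad_add_probe (src_pad, GST_PAD_PROBE_TYPE_BUFFER,
 *                      sgie_src_pad_buffer_probe, NULL, NULL);
 *   gst_object_unref (src_pad);
 */
```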


Okay, I will try your suggestion first and will provide the simple demo later. But you could start investigating the bug with the pipeline as described.

Sure, thanks. I would prefer to use your demo, since it will keep us aligned on the same setup.


Hi @bcao, I tried adding the probe on the src pad of the SGIE. With this approach, the user meta of type NVDSINFER_TENSOR_OUTPUT_META is present in the added probe. However, the OSD bin still does not render the display meta I attach in that probe: I can see the pose displayed for a second, and then it disappears a moment later. So the processing of the secondary GIE bin probably does not synchronize with the OSD when using the tee's separate branch in the deepstream-app example.

Hey, I think there are 2 problems here. One certain issue is that the NVDSINFER_TENSOR_OUTPUT_META metadata is missing if the probe is not on the src pad of nvinfer. We will use this topic to track that issue.

For the 2nd one, I would like you to create a new topic to discuss the OSD-related issue, but I need to confirm whether it is really an issue.

Okay, maybe you are right that the NVDSINFER_TENSOR_OUTPUT_META metadata is missing if the probe is not on the src pad of nvinfer. Hoping for good news about this.

Hey customer, I am not sure whether you consider adding the probe on the src pad of nvinfer an acceptable workaround, or whether you want us to root-cause the issue. If the latter, it would be better if you could share a simple demo with me to reproduce the issue.

Hi, I confirm my point: the issue is that the NVDSINFER_TENSOR_OUTPUT_META metadata is missing if the probe is not on the src pad of nvinfer.

And there is no OSD issue anymore.

Hey customer, do you still need support on this topic? If yes, would you mind sharing a simple sample with us to reproduce the issue?


Thank you @bcao. I do not need support on this topic anymore. We can close this topic.

Got it, thanks.