How to probe "pgie_pad_buffer_probe" with deepstream-app.c

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.2.1
• NVIDIA GPU Driver Version (valid for GPU only) 450.102.04
• Issue Type( questions, new requirements, bugs) Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

Add the following in deepstream-app.c:

NVGSTDS_ELEM_ADD_PROBE (pipeline->common_elements.primary_tensor_buffer_probe_id,
    pipeline->common_elements.primary_gie_bin.bin, "src",
    pgie_pad_buffer_probe, GST_PAD_PROBE_TYPE_BUFFER,
    &pipeline->common_elements);

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi,

I want to use the tensor metadata output in deepstream-app.c.
I have checked "deepstream-infer-tensor-meta-test" for the same.
I have enabled "output-tensor-meta=1" in the config file and used the "pgie_pad_buffer_probe()" function in deepstream-app.c.

My question is:
By only setting "output-tensor-meta=1", can I get the tensor data out of the GstBuffer in "pgie_pad_buffer_probe()"? Or do I need to install the probe myself, like this:

if (config->primary_gie_config.enable) {
  NVGSTDS_ELEM_ADD_PROBE (pipeline->common_elements.primary_tensor_buffer_probe_id,
      pipeline->common_elements.primary_gie_bin.bin, "src",
      pgie_pad_buffer_probe, GST_PAD_PROBE_TYPE_BUFFER,
      &pipeline->common_elements);
}

inside "deepstream-app.c"?

If I do that, I get a "Segmentation fault". And if I don't, I am not able to reach "pgie_pad_buffer_probe()" at all.

Thanks

You must enable "output-tensor-meta" in the config if you want to access the output tensor.
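For reference, "output-tensor-meta" is a Gst-nvinfer property, so in a typical DeepStream 5.0 setup it goes in the [property] group of the nvinfer config file that [primary-gie] points to via config-file (a minimal sketch; your file layout may differ):

[property]
# attach the raw inference output (NvDsInferTensorMeta) to the buffer
output-tensor-meta=1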

But I think you can access it in deepstream_app.c → gie_primary_processing_done_buf_prob instead of installing another probe.
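For reference, below is a minimal sketch, modelled on the loop in deepstream-infer-tensor-meta-test (DeepStream 5.0 headers assumed; the helper name read_pgie_tensor_meta is only illustrative). It locates the raw inference output that Gst-nvinfer attaches as frame user meta when output-tensor-meta=1:

#include <gst/gst.h>
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h"

static void
read_pgie_tensor_meta (GstBuffer * buf)
{
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return;

  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    /* Search the frame-level user meta for the PGIE tensor output. */
    for (NvDsMetaList * l_user = frame_meta->frame_user_meta_list; l_user;
        l_user = l_user->next) {
      NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
      if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
        continue;

      NvDsInferTensorMeta *tensor_meta =
          (NvDsInferTensorMeta *) user_meta->user_meta_data;

      /* Point each output layer at its host copy of the raw tensor data. */
      for (guint i = 0; i < tensor_meta->num_output_layers; i++) {
        NvDsInferLayerInfo *layer = &tensor_meta->output_layers_info[i];
        layer->buffer = tensor_meta->out_buf_ptrs_host[i];
        g_print ("frame %d: output layer %u (%s)\n",
            frame_meta->frame_num, i, layer->layerName);
      }
    }
  }
}

Inside gie_primary_processing_done_buf_prob you would simply call this helper on GST_PAD_PROBE_INFO_BUFFER (info); the rest of that probe does not need to change.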

Hi, thanks for the reply.

I had checked a previous post on the DeepStream forum as well.
It mentions that if I want to get the inference output tensor buffer, I must set "output-tensor-meta=1" in the config file, and the tensor data will then appear in the "pgie_pad_buffer_probe()" callback.

I am doing the same. I just wanted to know how to install the "pgie_pad_buffer_probe" probe in deepstream-app.c, the way it is installed in "deepstream-infer-tensor-meta-test"…

As you mentioned above, can I get the tensor out in "gie_primary_processing_done_buf_prob" as well?

Yes, you can get the raw tensor if the probe is installed on the src pad of the nvinfer plugin.
For how to install the probe, you can check the following code in deepstream_app.c:

NVGSTDS_ELEM_ADD_PROBE (pipeline->common_elements.primary_bbox_buffer_probe_id,
    pipeline->common_elements.primary_gie_bin.bin, "src",
    gie_primary_processing_done_buf_prob, GST_PAD_PROBE_TYPE_BUFFER,
    pipeline->common_elements.appCtx);
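If you still want a separate probe for pgie_pad_buffer_probe, a minimal sketch of the same pattern follows. It assumes you add your own gulong primary_tensor_buffer_probe_id field to the common_elements struct in deepstream_app.h, and the last argument must match whatever your copy of pgie_pad_buffer_probe expects as u_data; if the probe dereferences something else, it will crash.

/* Sketch only: primary_tensor_buffer_probe_id is an assumed new field,
 * not part of stock deepstream_app.h. */
if (config->primary_gie_config.enable) {
  NVGSTDS_ELEM_ADD_PROBE (pipeline->common_elements.primary_tensor_buffer_probe_id,
      pipeline->common_elements.primary_gie_bin.bin, "src",
      pgie_pad_buffer_probe, GST_PAD_PROBE_TYPE_BUFFER,
      pipeline->common_elements.appCtx);
}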