Optimal Handling of Tensor Metadata in DeepStream Service Maker: BufferProbe vs DataReceiver

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 7.0 (docker image: nvcr.io/nvidia/deepstream:7.0-sample-multiarch )
• NVIDIA GPU Driver Version (valid for GPU only) 535.171.04


I am currently working with the DeepStream Service Maker C++ APIs, and I have a question about the best way to handle tensor metadata from models such as ReIdentificationNet and BodyPoseNet.

In the file buffer_probe.hpp, I came across the following documentation:

 * @file
 * <b>Service maker buffer probe definitions </b>
 * @b Description: Buffer probe offers a mechanism for peeking the output buffers.
 * Both the data and the metadata carried by the buffer are accessible through
 * buffer probe.
 * Yet it is not recommended to perform complex processing within a probe, which
 * could potentially disrupt the running pipeline. For data processing purpose,
 * data receiver is the right choice. @see DataReceiver.

My goal is to access the tensor metadata from these models and add the output as NvDsEmbedding and NvDsJoints to NvDsUserMetaList inside the NvDsObjectMeta.

Should I perform these operations using the BufferProbe or the DataReceiver?

Additionally, I noticed that data_receiver.hpp is included in the deepstream:7.0-sample-multiarch Docker image for x86 but not for Jetson. Can someone clarify this discrepancy?

Thank you

As the documentation says, "it is not recommended to perform complex processing within a probe". If your operation is not very time-consuming, it is fine to handle it in the probe; otherwise, the DataReceiver is the right choice.

We will check this. Thanks

Hi @alfonso-corrado, we'll provide the data_receiver.hpp file on Jetson in a future version. For now, you can port the file from x86 to Jetson.