Access tensor metadata with Service Maker C++ APIs

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 7.0 (docker image: nvcr.io/nvidia/deepstream:7.0-sample-multiarch)
• NVIDIA GPU Driver Version (valid for GPU only): 535.171.04

Hi,

I am currently working with the DeepStream Service Maker C++ APIs.
I need to extract embedding features of people using the ReIdentificationNet model. I am using the Gst-nvinfer plugin with the following parameters:

output-tensor-meta: 1
network-type: 100
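
In the sgie's nvinfer configuration these sit under the property group; a minimal sketch in the YAML style (only network-type and output-tensor-meta are from my actual setup, the file name and the remaining keys are illustrative):

# sgie config excerpt (illustrative; keys other than the two above are assumptions)
property:
  process-mode: 2        # secondary GIE: run on detected objects
  unique-id: 2           # matched against tensor_meta->unique_id in the probe
  network-type: 100      # skip DeepStream post-processing, emit raw tensors
  output-tensor-meta: 1  # attach raw output tensors as NvDsInferTensorMeta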

With the DeepStream C APIs, I would use a probe like this:

#include <cuda_runtime_api.h>
#include "gstnvdsmeta.h"
#include "gstnvdsinfer.h" /* NvDsInferTensorMeta, NVDSINFER_TENSOR_OUTPUT_META */
#include "nvdsmeta_schema.h"

static GstPadProbeReturn
body_embedding_gie_src_pad_buffer_probe(GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(GST_BUFFER(info->data));

  /* Iterate each frame metadata in batch */
  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next)
  {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)l_frame->data;

    /* Iterate object metadata in frame */
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next)
    {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *)l_obj->data;

      for (NvDsMetaList *l_user = obj_meta->obj_user_meta_list; l_user != NULL; l_user = l_user->next)
      {
        NvDsUserMeta *user_meta = (NvDsUserMeta *)l_user->data;
        if (user_meta->base_meta.meta_type == NVDSINFER_TENSOR_OUTPUT_META)
        {
          /* convert to tensor metadata */
          NvDsInferTensorMeta *tensor_meta = (NvDsInferTensorMeta *)user_meta->user_meta_data;
          if (tensor_meta->unique_id == 2)
          {
            NvDsEmbedding embedding;
            NvDsInferDims embedding_dims = tensor_meta->output_layers_info[0].inferDims;
            int embedding_length = embedding_dims.d[0];
            embedding.embedding_length = embedding_length;
            /* the caller owns this buffer; release it with g_free() when done */
            embedding.embedding_vector = (float *)g_malloc0(embedding_length * sizeof(float));
            /* copy the raw output tensor from device to host */
            cudaMemcpy(embedding.embedding_vector, (float *)(tensor_meta->out_buf_ptrs_dev[0]),
                       embedding_length * sizeof(float), cudaMemcpyDeviceToHost);
            break;
          }
        }
      }
    }
  }
  return GST_PAD_PROBE_OK;
}
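
The probe is then attached to the sgie's src pad in the usual way (a sketch; the pipeline handle and the element name "secondary-gie" are assumptions):

GstElement *sgie = gst_bin_get_by_name(GST_BIN(pipeline), "secondary-gie");
GstPad *sgie_src_pad = gst_element_get_static_pad(sgie, "src");
gst_pad_add_probe(sgie_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
                  body_embedding_gie_src_pad_buffer_probe, NULL, NULL);
gst_object_unref(sgie_src_pad);
gst_object_unref(sgie);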

However, I am having trouble accessing the tensor metadata using the DeepStream Service Maker C++ APIs, and I have not found any relevant references in the Service Maker examples.
Could anyone guide me on how to achieve this?

Thank you

Since Service Maker is an alpha release, it still lacks some functions, so you cannot access the tensor metadata directly through the Service Maker API.

However, I have provided a workaround. You can refer to this tarball:

deepstream_reid.tar.gz (3.6 KB)

1. Modify the Metadata class in /opt/nvidia/deepstream/deepstream/service-maker/includes/metadata.hpp as follows. The change adds a get() accessor that exposes the opaque data pointer; a sketch of how it is used follows the steps below.

class Metadata {
 public:
  /** @brief Constructor through an opaque pointer */
  Metadata(void* data);

  /** @brief Destructor */
  virtual ~Metadata();

  /** @brief operator to check if a metadata is void */
  virtual operator bool() { return data_ != nullptr; }

  /** @brief accessor for the underlying opaque pointer (added for the workaround) */
  void *get() const { return data_; }
 protected:
  void* data_; // opaque data pointer
};

2. Copy the tarball to /opt/nvidia/deepstream/deepstream/service-maker/sources/apps and extract it:

tar zxvf deepstream_reid.tar.gz

3. Edit sgie_reidentificationnet_tao_config.yml and pgie_peoplenet_transformer_tao_config.yml to match the location of your models.

4. Build and run:

cd deepstream_reid/
mkdir build
cd build
cmake ..
make
./deepstream-reid /opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 

You will see the results printed to the terminal.
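
For reference, the idea behind the workaround: every Service Maker metadata wrapper derives from Metadata, so the added get() accessor exposes the underlying C structure (e.g. NvDsObjectMeta*), and from there the user-meta walk from the C probe works unchanged. A rough sketch, assuming the IBatchMetadataObserver interface and iterate() callbacks used in the Service Maker sample apps; the class name and the attach call are illustrative, not the tarball's actual code:

#include <cuda_runtime_api.h>
#include <iostream>
#include <vector>

#include "pipeline.hpp"    // Service Maker C++ API
#include "gstnvdsinfer.h"  // NvDsInferTensorMeta

using namespace deepstream;

class EmbeddingReader : public BufferProbe::IBatchMetadataObserver {
 public:
  virtual probeReturn handleData(BufferProbe &probe, const BatchMetadata &data) {
    data.iterate([](const FrameMetadata &frame_data) {
      frame_data.iterate([](const ObjectMetadata &object_data) {
        // the patched Metadata::get() exposes the raw NvDsObjectMeta*
        NvDsObjectMeta *obj_meta =
            reinterpret_cast<NvDsObjectMeta *>(object_data.get());
        for (NvDsMetaList *l = obj_meta->obj_user_meta_list; l; l = l->next) {
          auto *user_meta = reinterpret_cast<NvDsUserMeta *>(l->data);
          if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
            continue;
          auto *tensor_meta =
              reinterpret_cast<NvDsInferTensorMeta *>(user_meta->user_meta_data);
          if (tensor_meta->unique_id != 2)  // sgie unique-id, as in the C probe
            continue;
          unsigned int len = tensor_meta->output_layers_info[0].inferDims.d[0];
          std::vector<float> embedding(len);
          cudaMemcpy(embedding.data(), tensor_meta->out_buf_ptrs_dev[0],
                     len * sizeof(float), cudaMemcpyDeviceToHost);
          std::cout << "object " << obj_meta->object_id
                    << ": embedding length " << len << std::endl;
        }
      });
    });
    return probeReturn::Probe_Ok;
  }
};

// attached like any other probe in the samples, e.g.:
// pipeline.attach("sgie", new BufferProbe("embedding-reader", new EmbeddingReader()));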

I tested it on Jetson. If you use a dGPU, you may need to modify the library paths in CMakeLists.txt.
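
For example, something along these lines (illustrative only; verify the paths on your system):

# CMakeLists.txt excerpt -- paths are illustrative, adjust to your install
set(DS_ROOT /opt/nvidia/deepstream/deepstream)
if(CMAKE_SYSTEM_PROCESSOR STREQUAL "aarch64")
    # Jetson: Tegra driver libraries live in a platform-specific directory
    link_directories(${DS_ROOT}/lib /usr/local/cuda/lib64 /usr/lib/aarch64-linux-gnu/tegra)
else()
    # dGPU (x86_64)
    link_directories(${DS_ROOT}/lib /usr/local/cuda/lib64)
endif()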

Duplicate of: Optimal Handling of Tensor Metadata in Deepstream Service Maker: BufferProbe vs DataReceiver
