Output of engine in gstnvinfer_meta_utils.cpp

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version: 7.1.2
• NVIDIA GPU Driver Version (valid for GPU only): 11.1
• Issue Type (questions, new requirements, bugs): question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, for which plugin or for which sample application, and the function description.)

Dear professor:

I hope to get the last-layer output of an engine. I checked the samples such as "deepstream_infer_tensor_meta_test.cpp", "deepstream_user_metadata_app.c", and "gstnvinfer.cpp".

In particular, in the plugin code "gstnvinfer_meta_utils.cpp" I can find the meta, and the function of this .cpp file is described in the docs. However, it is hard for me to read the meta directly. I am using the car color classification model, so I hope to print the engine output as a label ID.

The attachment below is the code I found. I hope to use something like "cout << data << endl". I tried to find the definition of the meta, but I could not find it in the docs.

Please help me.

/* Attaches the raw tensor output to the GstBuffer as metadata. */
void
attach_tensor_output_meta (GstNvInfer *nvinfer, GstMiniObject *tensor_out_object,
    GstNvInferBatch *batch, NvDsInferContextBatchOutput *batch_output)
{
  NvDsBatchMeta *batch_meta = (nvinfer->process_full_frame) ?
      batch->frames[0].frame_meta->base_meta.batch_meta :
      batch->frames[0].obj_meta->base_meta.batch_meta;

  /* Create and attach NvDsInferTensorMeta for each frame/object. Also
   * increment the refcount of GstNvInferTensorOutputObject. */
  for (size_t j = 0; j < batch->frames.size (); j++) {
    GstNvInferFrame &frame = batch->frames[j];

    NvDsInferTensorMeta *meta = new NvDsInferTensorMeta;
    meta->unique_id = nvinfer->unique_id;
    meta->num_output_layers = nvinfer->output_layers_info->size ();
    meta->output_layers_info = nvinfer->output_layers_info->data ();
    meta->out_buf_ptrs_host = new void *[meta->num_output_layers];
    meta->out_buf_ptrs_dev = new void *[meta->num_output_layers];
    meta->gpu_id = nvinfer->gpu_id;
    meta->priv_data = gst_mini_object_ref (tensor_out_object);
    meta->network_info = nvinfer->network_info;

    /* Point the per-layer host/device pointers at this frame's/object's
     * slice of the batched output buffers. */
    for (unsigned int i = 0; i < meta->num_output_layers; i++) {
      NvDsInferLayerInfo &info = meta->output_layers_info[i];
      meta->out_buf_ptrs_dev[i] =
          (uint8_t *) batch_output->outputDeviceBuffers[i] +
          info.inferDims.numElements * get_element_size (info.dataType) * j;
      meta->out_buf_ptrs_host[i] =
          (uint8_t *) batch_output->hostBuffers[info.bindingIndex] +
          info.inferDims.numElements * get_element_size (info.dataType) * j;
    }

    /* Wrap the tensor meta in an NvDsUserMeta and attach it to the frame
     * (full-frame / PGIE mode) or to the object (SGIE mode). */
    NvDsUserMeta *user_meta = nvds_acquire_user_meta_from_pool (batch_meta);
    user_meta->user_meta_data = meta;
    user_meta->base_meta.meta_type = (NvDsMetaType) NVDSINFER_TENSOR_OUTPUT_META;
    user_meta->base_meta.release_func = release_tensor_output_meta;
    user_meta->base_meta.copy_func = nullptr;
    user_meta->base_meta.batch_meta = batch_meta;

    if (nvinfer->process_full_frame) {
      nvds_add_user_meta_to_frame (frame.frame_meta, user_meta);
    } else {
      nvds_add_user_meta_to_obj (frame.obj_meta, user_meta);
    }
  }
}
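
For example, the kind of print I am hoping for might go right after the inner loop, once out_buf_ptrs_host is filled in. This is only my own sketch (it assumes a single FLOAT output layer and "#include <iostream>" at the top of the file), not code from the plugin:

    /* Sketch only (not in the plugin source): print the raw values and the
     * argmax label id for frame/object j. Assumes one FLOAT output layer. */
    NvDsInferLayerInfo &out_info = meta->output_layers_info[0];
    if (out_info.dataType == FLOAT) {
      float *data = (float *) meta->out_buf_ptrs_host[0];
      unsigned int label_id = 0;
      for (unsigned int e = 1; e < out_info.inferDims.numElements; e++) {
        if (data[e] > data[label_id])
          label_id = e;
      }
      std::cout << "gie " << nvinfer->unique_id << " label id " << label_id
          << " prob " << data[label_id] << std::endl;
    }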

Please refer to /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-infer-tensor-meta-test; the tensor output can be accessed at the application level. You can find how to get and parse NvDsInferTensorMeta in the sgie_pad_buffer_probe() function.
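
For the car color classifier, the parsing reduces to an argmax over the softmax output. A simplified sketch of such a probe is below; it is modeled on the sample, not copied from it, and it assumes the classifier runs as an SGIE on objects, that output-tensor-meta is enabled on the nvinfer element, and that there is a single FLOAT output layer:

    /* Sketch of an application-level probe, modeled on
     * deepstream-infer-tensor-meta-test. */
    #include <iostream>
    #include <gst/gst.h>
    #include "gstnvdsmeta.h"
    #include "gstnvdsinfer.h"

    static GstPadProbeReturn
    sgie_pad_buffer_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
    {
      GstBuffer *buf = (GstBuffer *) info->data;
      NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

      for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
          l_frame = l_frame->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
        for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj;
            l_obj = l_obj->next) {
          NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
          /* The SGIE attaches its tensor output as object-level user meta. */
          for (NvDsMetaList *l_user = obj_meta->obj_user_meta_list; l_user;
              l_user = l_user->next) {
            NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
            if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
              continue;
            NvDsInferTensorMeta *meta =
                (NvDsInferTensorMeta *) user_meta->user_meta_data;
            /* Assume the first (and only) output layer holds the softmax. */
            NvDsInferLayerInfo *layer = &meta->output_layers_info[0];
            if (layer->dataType != FLOAT)
              continue;
            float *probs = (float *) meta->out_buf_ptrs_host[0];
            unsigned int label_id = 0;
            for (unsigned int e = 1; e < layer->inferDims.numElements; e++)
              if (probs[e] > probs[label_id])
                label_id = e;
            std::cout << "object " << obj_meta->object_id
                << " label id " << label_id
                << " prob " << probs[label_id] << std::endl;
          }
        }
      }
      return GST_PAD_PROBE_OK;
    }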

NvDsInferTensorMeta is defined in /opt/nvidia/deepstream/deepstream/sources/includes/gstnvdsinfer.h
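
For reference, the struct carries exactly the fields that attach_tensor_output_meta() fills in above. Paraphrased here with comments added; see gstnvdsinfer.h for the authoritative definition:

    typedef struct
    {
      /* Unique ID of the gst-nvinfer instance that attached this meta. */
      guint unique_id;
      /* Number of bound output layers. */
      guint num_output_layers;
      /* Per-layer info (name, dims, data type, binding index);
       * array of size num_output_layers. */
      NvDsInferLayerInfo *output_layers_info;
      /* Host and device pointers to the raw output buffers, one per layer. */
      void **out_buf_ptrs_host;
      void **out_buf_ptrs_dev;
      /* GPU ID on which the device buffers were allocated. */
      gint gpu_id;
      /* Private data (the refcounted GstNvInferTensorOutputObject). */
      void *priv_data;
      /* Network input resolution. */
      NvDsInferNetworkInfo network_info;
    } NvDsInferTensorMeta;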

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks


Thank you very much.

