The correct way to attach a custom-prepared tensor to batch meta

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.1.1
• TensorRT Version: 8.2.3.0
• NVIDIA GPU Driver Version (valid for GPU only): 525.105.17
• Issue Type( questions, new requirements, bugs): Question

I’m trying to attach a pre-prepared tensor to batch meta so that nvinfer runs inference on it. Starting from nvdspreprocess, I adapted the logic for attaching preprocessed tensors to batch meta and wrote my own attach and release functions inside a custom plugin that handles one particular source.

The problem: the pipeline works with no issues with a single source and batch-size=1. However, after adding a second source and increasing the batch size to 2, detections appear for only one source at a time and keep switching between the outputs of the two sources.

I’m allocating the tensor buffer like this:

cudaMalloc((void **)&tensorData,
           batch_size * width * height * num_channels * sizeof(float)); // 3 ch, 4 bytes for float input
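
For completeness, here is a rough sketch of how the allocation can be guarded and how each frame’s preprocessed data is copied into its slot of the batched NCHW buffer (the helper names are illustrative, not my exact code):

#include <cuda_runtime.h>
#include <cstdio>

// Allocate one contiguous device buffer holding batch_size NCHW frames.
static float *allocBatchedTensor(int batch_size, int num_channels,
                                 int height, int width) {
  float *tensorData = nullptr;
  size_t bytes = (size_t)batch_size * num_channels * height * width *
                 sizeof(float);
  cudaError_t err = cudaMalloc((void **)&tensorData, bytes);
  if (err != cudaSuccess) {
    fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
    return nullptr;
  }
  return tensorData;
}

// Copy one preprocessed frame (already on the device) into batch slot frame_idx.
static void copyFrameToSlot(float *tensorData, const float *d_frame,
                            int frame_idx, int num_channels, int height,
                            int width, cudaStream_t stream) {
  size_t frame_elems = (size_t)num_channels * height * width;
  cudaMemcpyAsync(tensorData + (size_t)frame_idx * frame_elems, d_frame,
                  frame_elems * sizeof(float), cudaMemcpyDeviceToDevice,
                  stream);
}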

I then call addFrameUserMeta to attach the buffer to the batch meta:

void addFrameUserMeta(NvDsFrameMeta *p_frame_meta, float *tensorData,
                      int batch_size) {
  NvDsUserMeta *user_meta = NULL;
  NvDsBatchMeta *batch_meta = NULL;

  /** attach preprocess batchmeta as user meta at batch level */
  GstNvDsPreProcessBatchMeta *preprocess_batchmeta =
      new GstNvDsPreProcessBatchMeta;

  // Create a vector to hold the ROI metadata
  std::vector<NvDsRoiMeta> roi_vector;

  // Create an NvDsRoiMeta structure for the ROI
  NvDsRoiMeta roi_meta;
  memset(&roi_meta, 0, sizeof(NvDsRoiMeta));

  // Fill in the ROI metadata (full frame)
  roi_meta.roi.left = 0;
  roi_meta.roi.top = 0;
  roi_meta.roi.width = p_frame_meta->pipeline_width;
  roi_meta.roi.height = p_frame_meta->pipeline_height;

  roi_meta.scale_ratio_x = 1;
  roi_meta.scale_ratio_y = 1;
  roi_meta.offset_left = 0;
  roi_meta.offset_top = 0;

  roi_meta.frame_meta = p_frame_meta;

  // Add the ROI metadata to the vector
  roi_vector.push_back(roi_meta);

  preprocess_batchmeta->roi_vector.clear();
  preprocess_batchmeta->roi_vector = roi_vector;

  preprocess_batchmeta->tensor_meta = new NvDsPreProcessTensorMeta;

  preprocess_batchmeta->tensor_meta->gpu_id = 0;
  preprocess_batchmeta->tensor_meta->private_data = nullptr;
  preprocess_batchmeta->tensor_meta->raw_tensor_buffer = tensorData;
  preprocess_batchmeta->tensor_meta->tensor_shape = {batch_size, 3, 640, 640};
  preprocess_batchmeta->tensor_meta->buffer_size =
      batch_size * 3 * 4 * 640 * 640;
  preprocess_batchmeta->tensor_meta->data_type =
      NvDsDataType_FP32; // NvDsDataType_FP32 NvDsDataType_UINT8
  preprocess_batchmeta->tensor_meta->tensor_name = "images";

  // Target the nvinfer instance with unique-id 2
  std::vector<guint64> ids = {2};
  preprocess_batchmeta->target_unique_ids = ids;

  preprocess_batchmeta->private_data = nullptr;

  batch_meta = p_frame_meta->base_meta.batch_meta;
  user_meta = nvds_acquire_user_meta_from_pool(batch_meta);

  /* Set NvDsUserMeta below */
  user_meta->user_meta_data = preprocess_batchmeta;

  user_meta->base_meta.meta_type = (NvDsMetaType)NVDS_PREPROCESS_BATCH_META;

  user_meta->base_meta.copy_func = NULL;
  user_meta->base_meta.release_func = release_user_meta_at_batch_level;

  user_meta->base_meta.batch_meta = batch_meta;

  /* We want to add NvDsUserMeta to batch level */
  nvds_add_user_meta_to_batch(batch_meta, user_meta);
}
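
One thing I am not sure about: looking at the nvdspreprocess sources, roi_vector seems to carry one entry per batched frame (so that nvinfer can map each slot of raw_tensor_buffer back to its source frame). For batch_size = 2, the batch-level meta would presumably be filled more like this rough sketch (not my current code; addRoiPerFrame is just an illustrative helper):

// Hedged sketch: one full-frame NvDsRoiMeta per batched frame, in batch order.
static void addRoiPerFrame(GstNvDsPreProcessBatchMeta *preprocess_batchmeta,
                           NvDsBatchMeta *batch_meta) {
  preprocess_batchmeta->roi_vector.clear();
  for (NvDsMetaList *l = batch_meta->frame_meta_list; l != NULL; l = l->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)l->data;

    NvDsRoiMeta roi_meta;
    memset(&roi_meta, 0, sizeof(NvDsRoiMeta));
    roi_meta.roi.left = 0;
    roi_meta.roi.top = 0;
    roi_meta.roi.width = frame_meta->pipeline_width;
    roi_meta.roi.height = frame_meta->pipeline_height;
    roi_meta.scale_ratio_x = 1;
    roi_meta.scale_ratio_y = 1;
    roi_meta.frame_meta = frame_meta;

    preprocess_batchmeta->roi_vector.push_back(roi_meta);
  }
}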

Here is my release function:

static void release_user_meta_at_batch_level(gpointer data,
                                             gpointer user_data) {
  NvDsUserMeta *user_meta = (NvDsUserMeta *)data;
  GstNvDsPreProcessBatchMeta *preprocess_batchmeta =
      (GstNvDsPreProcessBatchMeta *)user_meta->user_meta_data;
  if (preprocess_batchmeta->tensor_meta != nullptr) {
    delete preprocess_batchmeta->tensor_meta;
  }
  delete preprocess_batchmeta;
}
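
Note that the release function does not free raw_tensor_buffer itself. If the meta were meant to own the cudaMalloc'd buffer, the release would presumably also need a cudaFree, roughly like this (an illustrative sketch, not my current code):

// Illustrative variant: only needed if the meta owns the device buffer.
static void release_user_meta_owning_buffer(gpointer data, gpointer user_data) {
  NvDsUserMeta *user_meta = (NvDsUserMeta *)data;
  GstNvDsPreProcessBatchMeta *preprocess_batchmeta =
      (GstNvDsPreProcessBatchMeta *)user_meta->user_meta_data;
  if (preprocess_batchmeta->tensor_meta != nullptr) {
    // Free the CUDA buffer before dropping the tensor meta that points to it.
    if (preprocess_batchmeta->tensor_meta->raw_tensor_buffer != nullptr)
      cudaFree(preprocess_batchmeta->tensor_meta->raw_tensor_buffer);
    delete preprocess_batchmeta->tensor_meta;
  }
  delete preprocess_batchmeta;
}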

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

We have an nvdspreprocess library sample in /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspreprocess/nvdspreprocess_lib. Have you tested this sample? Does the sample work with multiple sources?
