DeepStream C++: Holding Cropped Image Buffers After Line Crossing via NvDsAnalytics

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU, A4000
• DeepStream Version: 6.3
• TensorRT Version: 8.5.3-1+cuda11.8
• NVIDIA GPU Driver Version (valid for GPU only): 560.35.05
• Issue Type (questions, new requirements): question / new requirement
• Requirement details (module name, function description): nvds_obj_encode.h

Hi,
I’m working on a DeepStream C++ pipeline (Ubuntu 22.04, NVIDIA A4000), and I’ve hit a roadblock trying to retain cropped image buffers in memory after a line-crossing event.
I’m using the NvDsAnalytics plugin to detect line crossings. Inside the analytics probe, I’m currently calling:

nvds_obj_enc_process(enc_ctx, &objData, ip_surf, obj_meta, frame_meta);

This works perfectly for saving cropped images to disk using NvDsObjEncUsrArgs.
What I now need is to hold the cropped image buffer in memory (e.g., for further inference or streaming) without saving it to disk. I tried accessing NvDsObjEncOutParams, but it fails the metadata check:

if (usrMeta->base_meta.meta_type == NVDS_CROP_IMAGE_META) { ... }

This is expected, since the analytics probe doesn’t generate this type — NVDS_CROP_IMAGE_META is only attached in the PGIE probe.

Question:

deepstream_yolo_app.txt (38.2 KB)

How can I access or retain the cropped image buffer in memory (e.g., as cv::Mat) after a line-cross event in the analytics probe, without duplicating logic in the PGIE probe?

Any suggestions on how to get access to that crop buffer, or trigger encoding manually from the analytics probe?

Please refer to /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-image-meta-test/. If frameData.attachUsrMeta is set to true, you can get the NVDS_CROP_IMAGE_META user meta from obj_meta.

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks! Can you get the NVDS_CROP_IMAGE_META user meta after frameData.attachUsrMeta is set to true? Please refer to osd_sink_pad_buffer_probe for how to access the NVDS_CROP_IMAGE_META user meta, which includes the cropped image buffer.

When I use osd_sink_pad_buffer_probe, I can access NVDS_CROP_IMAGE_META, but I want to access it in nvdsanalytics_src_pad_buffer_probe on the nvdsanalytics pad, not the OSD pad. Please check my code.

In nvdsanalytics_src_pad_buffer_probe, you can call nvds_obj_enc_process and nvds_obj_enc_finish while iterating over NvDsFrameMeta, then iterate over NvDsFrameMeta a second time to get the NVDS_CROP_IMAGE_META user meta, still inside nvdsanalytics_src_pad_buffer_probe. You don’t need to get the NVDS_CROP_IMAGE_META user meta in osd_sink_pad_buffer_probe.
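One more note for the "hold it in memory" part of the question: the outBuffer inside NvDsObjEncOutParams is owned by the attached user meta and is released once the batch moves downstream, so copy the bytes out if you want to keep them. A minimal sketch (`CropCopy` and `retain_crop` are illustrative names, not DeepStream API):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Illustrative container for a retained crop; not part of the DeepStream API.
struct CropCopy {
    std::string label;          // class label at the time of the crossing
    uint64_t object_id;         // tracker ID from NvDsObjectMeta
    std::vector<uint8_t> jpeg;  // owned copy of the encoded JPEG bytes
};

// Copy the encoded buffer (NvDsObjEncOutParams::outBuffer / outLen) so it
// outlives the probe; the meta-owned buffer is not valid after the batch
// moves downstream.
CropCopy retain_crop(const uint8_t *out_buffer, size_t out_len,
                     const std::string &label, uint64_t object_id) {
    CropCopy c;
    c.label = label;
    c.object_id = object_id;
    c.jpeg.assign(out_buffer, out_buffer + out_len);
    return c;
}
```

In the second pass you would call something like `retain_crop(enc_jpeg_image->outBuffer, enc_jpeg_image->outLen, class_name, obj_meta->object_id)` and hand the result to your secondary inference code; a `cv::Mat` can then be produced from the copied bytes with `cv::imdecode`.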

My use case is: when a line is crossed, save the image to disk and also keep the image buffer in memory, so I need both operations — saving the image and buffering it to send to a secondary inference script without delay. Below is my attempt, but not all images are saved when lines are crossed.

static GstPadProbeReturn nvdsanalytics_src_pad_buffer_probe(GstPad *pad, GstPadProbeInfo *info, gpointer u_data) {
    GstBuffer *buf = (GstBuffer *)info->data;
    GstMapInfo inmap = GST_MAP_INFO_INIT;
    NvDsObjEncCtxHandle enc_ctx = static_cast<NvDsObjEncCtxHandle>(u_data);

    if (!gst_buffer_map(buf, &inmap, GST_MAP_READ)) {
        GST_ERROR("Failed to map GstBuffer.");
        return GST_PAD_PROBE_OK;
    }

    NvBufSurface *ip_surf = (NvBufSurface *)inmap.data;
    gst_buffer_unmap(buf, &inmap);

    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);
    if (!batch_meta) return GST_PAD_PROBE_OK;

    const gchar *calc_enc_str = g_getenv("CALCULATE_ENCODE_TIME");
    gboolean calc_enc = !g_strcmp0(calc_enc_str, "yes");
    const char *sensor_id = "sensor_0";

    for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)l_frame->data;
        int source_id = frame_meta->source_id;
        std::string directory = "save_img/" + std::to_string(source_id);
        ensure_directory(directory);

        for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next) {
            NvDsObjectMeta *obj_meta = (NvDsObjectMeta *)l_obj->data;
            bool crossed_line = false;

            // STEP 1: Check if a line crossing occurred
            for (NvDsMetaList *l_user = obj_meta->obj_user_meta_list; l_user != NULL; l_user = l_user->next) {
                NvDsUserMeta *user_meta = (NvDsUserMeta *)l_user->data;

                // Check for analytics metadata (line crossing detection)
                if (user_meta->base_meta.meta_type ==
                    nvds_get_user_meta_type((gchar *)"NVIDIA.DSANALYTICSOBJ.USER_META")) {

                    NvDsAnalyticsObjInfo *analytics = (NvDsAnalyticsObjInfo *)user_meta->user_meta_data;

                    // Check for line crossing
                    if (!analytics->lcStatus.empty()) {
                        crossed_line = true;
                        std::cout << "🔍 Line crossing detected for object ID: " << obj_meta->object_id << std::endl;
                        break; // Exit loop once line crossing is found
                    }
                }
            }

            // STEP 2: If a line crossing was detected, trigger object encoding
            if (crossed_line) {
                const char *class_name = ((size_t)obj_meta->class_id < class_labels.size()) ?
                                         class_labels[obj_meta->class_id].c_str() : "object";

                std::cout << "🎯 Triggering object encoding for line crossing: " << class_name << std::endl;

                // Prepare encoding parameters
                NvDsObjEncUsrArgs objData = {0};
                objData.saveImg = FALSE;       // Don't save to file automatically
                objData.attachUsrMeta = TRUE;  // Attach metadata so we can access NvDsObjEncOutParams
                objData.scaleImg = FALSE;
                objData.scaledWidth = 0;
                objData.scaledHeight = 0;
                objData.objNum = 1;
                objData.quality = 90;

                // STEP 3: Call nvds_obj_enc_process to generate encoded data
                nvds_obj_enc_process(enc_ctx, &objData, ip_surf, obj_meta, frame_meta);

                std::cout << "✅ Object encoding process called for line crossing event" << std::endl;
            }

            // STEP 4: Now check for the generated NvDsObjEncOutParams after encoding
            if (crossed_line) {
                // Look for the newly created CROP_IMAGE_META
                for (NvDsMetaList *l_user = obj_meta->obj_user_meta_list; l_user != NULL; l_user = l_user->next) {
                    NvDsUserMeta *user_meta = (NvDsUserMeta *)l_user->data;

                    if (user_meta->base_meta.meta_type == NVDS_CROP_IMAGE_META) {
                        // Access the encoded data
                        NvDsObjEncOutParams *enc_jpeg_image = (NvDsObjEncOutParams *)user_meta->user_meta_data;

                        const char *class_name = ((size_t)obj_meta->class_id < class_labels.size()) ?
                                                 class_labels[obj_meta->class_id].c_str() : "object";

                        // Create filename for the line-crossing-triggered crop
                        char fileObjNameString[FILE_NAME_SIZE];
                        snprintf(fileObjNameString, FILE_NAME_SIZE, "%s/LC_%s_%d_%lu_%s.jpg",
                                 directory.c_str(), class_name, frame_number,
                                 obj_meta->object_id, class_name);

                        // Save the encoded buffer to file
                        FILE *file = fopen(fileObjNameString, "wb");
                        if (file) {
                            fwrite(enc_jpeg_image->outBuffer, sizeof(uint8_t), enc_jpeg_image->outLen, file);
                            fclose(file);
                            std::cout << "✅ Saved line crossing crop: " << fileObjNameString
                                      << " (size: " << enc_jpeg_image->outLen << " bytes)" << std::endl;
                        } else {
                            std::cerr << "❌ Failed to open file: " << fileObjNameString << std::endl;
                        }

                        // You can also process the buffer in memory here
                        // For example: send over network, analyze, etc.
                        process_encoded_buffer(enc_jpeg_image->outBuffer, enc_jpeg_image->outLen, class_name);

                        break; // Exit after finding the crop image
                    }
                }
            }

            // STEP 5: Generate Kafka event message for line crossing
            if (crossed_line && obj_meta->confidence > 0.5) {
                NvDsEventMsgMeta *msg_meta = (NvDsEventMsgMeta *)g_malloc0(sizeof(NvDsEventMsgMeta));
                msg_meta->type = NVDS_EVENT_CUSTOM;
                msg_meta->objType = NVDS_OBJECT_TYPE_CUSTOM;
                msg_meta->bbox.left = obj_meta->rect_params.left;
                msg_meta->bbox.top = obj_meta->rect_params.top;
                msg_meta->bbox.width = obj_meta->rect_params.width;
                msg_meta->bbox.height = obj_meta->rect_params.height;
                msg_meta->frameId = frame_number;
                msg_meta->trackingId = obj_meta->object_id;
                msg_meta->confidence = obj_meta->confidence;
                msg_meta->ts = (gchar *)g_malloc0(MAX_TIME_STAMP_LEN + 1);
                msg_meta->sensorStr = (gchar *)g_malloc0(MAX_SENSOR_STR_LEN);
                msg_meta->objectId = (gchar *)g_malloc0(MAX_LABEL_SIZE);
                msg_meta->videoPath = g_strdup_printf("%d", source_id);

                g_strlcpy(msg_meta->sensorStr, sensor_id, MAX_SENSOR_STR_LEN);

                const char *class_name = ((size_t)obj_meta->class_id < class_labels.size()) ?
                                         class_labels[obj_meta->class_id].c_str() :
                                         pgie_classes_str[obj_meta->class_id].c_str();

                generate_ts_rfc3339(msg_meta->ts, MAX_TIME_STAMP_LEN);

                std::cout << "📍 Line crossing event for " << class_name
                          << " (ID: " << obj_meta->object_id << ")" << std::endl;

                NvDsUserMeta *user_event_meta = nvds_acquire_user_meta_from_pool(batch_meta);
                if (user_event_meta) {
                    user_event_meta->user_meta_data = (void *)msg_meta;
                    user_event_meta->base_meta.meta_type = NVDS_EVENT_MSG_META;
                    user_event_meta->base_meta.copy_func = (NvDsMetaCopyFunc)meta_copy_func;
                    user_event_meta->base_meta.release_func = (NvDsMetaReleaseFunc)meta_free_func;
                    nvds_add_user_meta_to_frame(frame_meta, user_event_meta);

                    std::cout << "🚀 Sent line crossing metadata to Kafka: " << class_name
                              << " (conf: " << obj_meta->confidence << ")" << std::endl;
                } else {
                    g_print("Error in attaching event meta to buffer\n");
                    if (msg_meta->ts) g_free(msg_meta->ts);
                    if (msg_meta->sensorStr) g_free(msg_meta->sensorStr);
                    if (msg_meta->objectId) g_free(msg_meta->objectId);
                    g_free(msg_meta);
                }
            }
        }
    }

    nvds_obj_enc_finish(enc_ctx);
    frame_number++;
    return GST_PAD_PROBE_OK;
}
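For the "without any delay" part, my plan is to push the copied JPEG bytes into a small thread-safe queue that a secondary-inference worker thread drains, so the probe never blocks on the consumer. A sketch (`CropQueue` is an illustrative name, not a DeepStream type):

```cpp
#include <condition_variable>
#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

// Minimal blocking queue used to hand encoded crops from the probe thread
// to a consumer (e.g., a secondary-inference worker); illustrative only.
class CropQueue {
public:
    // Enqueue one encoded crop and wake a waiting consumer.
    void push(std::vector<uint8_t> jpeg) {
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push_back(std::move(jpeg));
        }
        cv_.notify_one();
    }

    // Block until an item is available, then remove and return it.
    std::vector<uint8_t> pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        std::vector<uint8_t> item = std::move(q_.front());
        q_.pop_front();
        return item;
    }

    size_t size() {
        std::lock_guard<std::mutex> lk(m_);
        return q_.size();
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<std::vector<uint8_t>> q_;
};
```

The probe would `push` a copy of `outBuffer` right after writing the file, and the worker thread loops on `pop` and forwards each buffer to the secondary inference script.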

Please refer to my last comment. You can iterate over NvDsFrameMeta twice: trigger the crop encode in the first pass, then read the crop buffer in the second pass. Here is a rough skeleton:

nvdsanalytics_src_pad_buffer_probe {
    // first pass: trigger encoding for objects that crossed a line
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next) {
        // check for line crossing, then call:
        nvds_obj_enc_process(...);
    }
    nvds_obj_enc_finish(...);

    // second pass: read the crop buffers attached by the encoder
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next) {
        if (user_meta->base_meta.meta_type == NVDS_CROP_IMAGE_META) {
            // access NvDsObjEncOutParams here
        }
    }
}

Thank you for the confirmation. Yes, that’s exactly the approach I was looking for. The issue has been resolved.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.