How to convert frame image data to jpeg

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
3090ti
• DeepStream Version
6.1 triton (docker)

I am working on customizing “deepstream-3d-action-recognition”

I want to convert the image of a frame with a recognized action (for example, a frame recognized as “walk”) to base64 and send it as metadata.

Therefore, in the “pgie_src_pad_buffer_probe()” function of “deepstream_3d_action_recognition.cpp”, I want to use the code below to obtain “NvBufSurface” data and convert it to jpeg.

        /* Map the GstBuffer to get the underlying NvBufSurface of the batch */
        GstMapInfo inmap = GST_MAP_INFO_INIT;
        if (!gst_buffer_map(buf, &inmap, GST_MAP_READ))
        {
            NVGSTDS_ERR_MSG_V("input buffer mapinfo failed");
            return GST_PAD_PROBE_OK;
        }

        NvBufSurface *ip_surf = (NvBufSurface *)inmap.data;
        gst_buffer_unmap(buf, &inmap);

Can you give me a method or a sample app that I can refer to?

Or let me know if there is another way to convert the frame image to JPEG, even if it does not use “NvBufSurface”.

You may refer to the sample deepstream-image-meta-test and the similar topic How to encode multiple images using the same objectMeta inside obj_meta_list - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums.
There is also a sample that uses OpenCV to write a JPEG file: Deepstream sample code snippet - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums
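
For reference, a rough sketch of the OpenCV route from that snippet topic. This assumes the frames were converted to RGBA upstream (e.g. with nvvideoconvert) and that the surface memory is CPU-mappable; it is an illustration, not the exact code from that topic:

        #include <opencv2/opencv.hpp>
        #include "nvbufsurface.h"

        /* ip_surf: the NvBufSurface obtained from the mapped GstBuffer */
        if (NvBufSurfaceMap(ip_surf, 0, 0, NVBUF_MAP_READ) == 0) {
          NvBufSurfaceSyncForCpu(ip_surf, 0, 0);             /* make GPU writes visible to the CPU */
          NvBufSurfaceParams *p = &ip_surf->surfaceList[0];  /* first frame of the batch */
          cv::Mat rgba(p->height, p->width, CV_8UC4, p->mappedAddr.addr[0], p->pitch);
          cv::Mat bgr;
          cv::cvtColor(rgba, bgr, cv::COLOR_RGBA2BGR);
          cv::imwrite("frame.jpg", bgr);
          NvBufSurfaceUnMap(ip_surf, 0, 0);
        }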

In “deepstream-3d-action-recognition”, there is no object data because it recognizes an action, not an object.
The topic you recommended uses the “nvds_obj_enc_process” function. Can I use that function without object data?
What I want is the full frame image.

Can you share the purpose of the metadata?
If you just want to convert the frame image to JPEG after the pgie, add a tee plugin to your app, such as: pgie --> tee --> nvjpegenc --> filesink
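
For illustration only, a minimal gst-launch-style sketch of such a branch. The element names, caps, and properties here are assumptions (the actual action-recognition pipeline uses nvdspreprocess and nvinferserver), and jpegenc can be substituted if nvjpegenc is not available in your setup:

        gst-launch-1.0 filesrc location=sample.mp4 ! decodebin ! m.sink_0 \
            nvstreammux name=m batch-size=1 width=1280 height=720 ! \
            nvinfer config-file-path=<pgie_config> ! tee name=t \
            t. ! queue ! fakesink \
            t. ! queue ! nvvideoconvert ! video/x-raw,format=I420 ! nvjpegenc ! multifilesink location=frame_%05d.jpg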

I am currently sending the recognition result frame by frame after performing action recognition in the app.

Below is the corresponding code

        NvDsMetaList *l_classifier = NULL;
        for (l_classifier = roi_meta.classifier_meta_list; l_classifier != NULL;
             l_classifier = l_classifier->next)
        {
          NvDsClassifierMeta *classifier_meta = (NvDsClassifierMeta *)(l_classifier->data);
          NvDsLabelInfoList *l_label;
          for (l_label = classifier_meta->label_info_list; l_label != NULL;
                 l_label = l_label->next)
          {
            NvDsLabelInfo *label_info = (NvDsLabelInfo *)l_label->data;

            NvDsDisplayMeta *display_meta = nvds_acquire_display_meta_from_pool(batch_meta);
            display_meta->num_labels = 1;

            NvOSD_TextParams *txt_params = &display_meta->text_params[0];
            txt_params->display_text = (char *)g_malloc0(MAX_STR_LEN);

            //[ym] image convert
            
            
            // [ym] convert timestamp (ns -> s) and shift UTC to local time (UTC+9)
            int64_t timestamp = (int64_t)(roi_meta.frame_meta->ntp_timestamp) / 1000000000;
            timestamp += 9 * 3600;  // apply the offset before gmtime so the date rolls over correctly
            time_t time = static_cast<time_t>(timestamp);
            tm *tm = gmtime(&time);
            char time_buf[50];
            strftime(time_buf, sizeof(time_buf), "%Y-%m-%d %H:%M:%S", tm);

            JsonBuilder *builder = json_builder_new();
            json_builder_begin_object(builder);
            
            json_builder_set_member_name(builder, "timestamp");
            json_builder_add_string_value(builder, time_buf);
            json_builder_set_member_name(builder, "source_id");
            json_builder_add_int_value(builder, roi_meta.frame_meta->source_id);
            json_builder_set_member_name(builder, "frame_num");
            json_builder_add_int_value(builder, roi_meta.frame_meta->frame_num);
            json_builder_set_member_name(builder, "result");
            json_builder_add_string_value(builder, label_info->result_label);

In addition to this, I want to convert the frame image to base64 and add it to the metadata.

As you said, if I configure the pipeline as “pgie → tee → nvjpegenc”, can the JPEG image produced by “nvjpegenc” be converted to base64 and transmitted?

This method is feasible. You can encode the JPEG to base64 yourself.
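
For example, a minimal sketch using GLib’s g_base64_encode (already available through the GStreamer dependency); jpeg_data and jpeg_len are placeholders for the encoded JPEG buffer and its length:

        #include <glib.h>

        /* jpeg_data / jpeg_len: the JPEG bytes produced by nvjpegenc or nvds_obj_enc_process */
        gchar *base64_str = g_base64_encode((const guchar *)jpeg_data, jpeg_len);
        /* ... add base64_str to the message/metadata ... */
        g_free(base64_str);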

Is there any sample app that converts frames to images with that structure (pgie → tee → nvjpegenc)?
I don’t have enough experience, so I need code that I can refer to.

This example is about tee

Thanks

Refer to the sample implementation in /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-image-meta-test/deepstream_image_meta_test.c

I’ve modified the functionality to do base64 encoding. I think you can call your json builder with the base64Data.

#include <iostream>
#include <string>
#include <glib.h>              /* g_base64_encode */
#include "gstnvdsmeta.h"       /* gst_buffer_get_nvds_batch_meta, NvDs* meta types */
#include "nvds_obj_encode.h"   /* nvds_obj_enc_process / nvds_obj_enc_finish */
#include "nvbufsurface.h"      /* NvBufSurface */

static GstPadProbeReturn
pgie_src_pad_buffer_probe(GstPad *pad, GstPadProbeInfo *info, gpointer ctx)
{
  GstBuffer *buf = (GstBuffer *)info->data;
  GstMapInfo inmap = GST_MAP_INFO_INIT;
  if (!gst_buffer_map(buf, &inmap, GST_MAP_READ))
  {
    GST_ERROR("input buffer mapinfo failed");
    return GST_PAD_PROBE_OK; /* probe callbacks return GstPadProbeReturn, not GstFlowReturn */
  }
  NvBufSurface *ip_surf = (NvBufSurface *)inmap.data;
  gst_buffer_unmap(buf, &inmap);

  NvDsObjectMeta *obj_meta = NULL;
  guint vehicle_count = 0;
  guint person_count = 0;
  NvDsMetaList *l_frame = NULL;
  NvDsMetaList *l_obj = NULL;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta(buf);

  // iterate through all the frames and objects to do encoding
  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next)
  {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);
    guint num_rects = 0;
    for (l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next)
    {
      obj_meta = (NvDsObjectMeta *)(l_obj->data);
      /* Conditions that user needs to set to encode the detected objects of interest */
      if ((obj_meta->class_id == PGIE_CLASS_ID_PERSON || obj_meta->class_id == PGIE_CLASS_ID_VEHICLE))
      {
        NvDsObjEncUsrArgs userData = {0};
        /* To be set by user */
        userData.saveImg = true;
        userData.attachUsrMeta = true;
        /* Set if Image scaling Required */
        userData.scaleImg = false;
        userData.scaledWidth = 0;
        userData.scaledHeight = 0;
        /* Preset */
        userData.objNum = num_rects;
        /* Quality */
        userData.quality = 80;
        /* attach the userData to obj_meta with encoded objects */
        nvds_obj_enc_process(ctx, &userData, ip_surf, obj_meta, frame_meta);
      }
    }
  }
  nvds_obj_enc_finish(ctx); // wait until all the selected objects have been encoded

  // iterate through all the frames and get the encoded usermetadata
  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next)
  {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *)(l_frame->data);
    guint num_rects = 0;
    for (l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next)
    {
      obj_meta = (NvDsObjectMeta *)(l_obj->data);

      NvDsUserMetaList *usrMetaList = obj_meta->obj_user_meta_list;
      while (usrMetaList != NULL)
      {
        NvDsUserMeta *usrMetaData = (NvDsUserMeta *)usrMetaList->data;
        if (usrMetaData->base_meta.meta_type == NVDS_CROP_IMAGE_META)
        {
          // encode the jpeg binary buffer to base64 using GLib's g_base64_encode
          NvDsObjEncOutParams *enc_jpeg_image = (NvDsObjEncOutParams *)usrMetaData->user_meta_data;
          uint8_t *buffer = enc_jpeg_image->outBuffer;
          size_t bufferLength = enc_jpeg_image->outLen;
          gchar *encoded = g_base64_encode(buffer, bufferLength);
          std::string base64Data(encoded);
          g_free(encoded);
          // base64Data can now be handed to the json builder
        }
        // always advance; otherwise the loop never terminates when a match is found
        usrMetaList = usrMetaList->next;
      }
    }
  }

  return GST_PAD_PROBE_OK;
}
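
As a rough sketch of the last step, the encoded string could then be added to the json builder from your earlier snippet (the member name “image” is an assumption, and base64Data would have to be made available where the builder runs):

            // [ym] image convert: attach the base64-encoded JPEG
            json_builder_set_member_name(builder, "image");
            json_builder_add_string_value(builder, base64Data.c_str());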

thank you!!!
