SIGSEGV when calling nvds_obj_enc_finish()

• Hardware Platform (Jetson / GPU): Xavier NX
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6
• Issue Type (questions, new requirements, bugs): Question/Bug

Hi DeepStream community,

I’m trying to capture a frame and encode a JPEG image when an object is detected. I’ve followed several examples, in particular the “RTSP camera access frame issue” topic, in the section “If you would like to save full frames, you can check”.
I’ve successfully converted the source surface from RGBA to NV12 to pass into
nvds_obj_enc_process(…)
which executes fine when stepping through gdb. I’m hoping to use the nvds_obj_enc_* functions instead of the OpenCV approach shown in that example.

The next line of code executed is
nvds_obj_enc_finish(…),
which generates a SIGSEGV, with the top of the stack trace showing the offending function as
nvds_acquire_user_meta_from_pool().

My question: nvds_acquire_user_meta_from_pool() requires an NvDsBatchMeta pointer.
Since no NvDsBatchMeta pointer is passed into nvds_obj_enc_process(…) or nvds_obj_enc_create_context(), where does nvds_obj_enc_finish(…) get the NvDsBatchMeta pointer from?

I hope this is sufficient information to get to an answer.
Thanks,

  • Doug

The version you referred to is old. You can refer to our source code deepstream_image_meta_test.c:

pgie_src_pad_buffer_probe

Thank you for the reply, @yuweiw. I am familiar with, and have referred to, deepstream_image_meta_test.c → pgie_src_pad_buffer_probe.
It’s not clear from that example how to encode the whole frame as a JPEG image rather than just the bounding boxes.

I’ve seen others ask how the parameters to nvds_obj_enc_process(ctx, &userData, ip_surf, obj_meta, frame_meta) should be populated for encoding the entire frame, specifically userData and obj_meta.

Could you please provide some insight or an example of how userData and obj_meta should be populated to JPEG-encode the entire frame?

I should also mention that I’m not saving the encoded JPEG image to a file, but using the JPEG buffer elsewhere in the program.
I’m assuming userData.attachUsrMeta needs to be TRUE and userData.saveImg needs to be FALSE. Please confirm.
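
To make the question concrete, here’s a minimal sketch of the settings I’m assuming for an in-memory, full-frame encode (none of this is confirmed by the docs; frame_width and frame_height are placeholders for the actual video dimensions):

NvDsObjEncUsrArgs userData = { 0 };
userData.saveImg = FALSE;       /* assumption: keep the JPEG in memory only */
userData.attachUsrMeta = TRUE;  /* assumption: attach the result as user meta */
userData.scaleImg = FALSE;
userData.quality = 80;
userData.objNum = 1;

obj_meta.rect_params.left   = 0;
obj_meta.rect_params.top    = 0;
obj_meta.rect_params.width  = frame_width;   /* placeholder: video width */
obj_meta.rect_params.height = frame_height;  /* placeholder: video height */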

Thank you,

  • Doug

You can try to set the obj_meta->rect_params parameters yourself. If you set width = video width, height = video height, top = 0, and left = 0, it will save the whole picture.
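
For example (a sketch; frame_meta->source_frame_width/height are one way to get the video dimensions):

obj_meta->rect_params.left   = 0;
obj_meta->rect_params.top    = 0;
obj_meta->rect_params.width  = frame_meta->source_frame_width;
obj_meta->rect_params.height = frame_meta->source_frame_height;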

Thanks for the suggestion, @yuweiw. I already have those params set. Here’s the code for the function called from osd_sink_pad_buffer_probe(…). Much of it is taken from existing examples, and I’ve left commented-out code in so you can see what I’ve tried:

static void
cvt_nvbufsurf_rgba_to_nv12(GstBuffer *buf, NvDsFrameMeta *frame_meta, NvDsBatchMeta *batch_meta)
{
    GstMapInfo in_map_info;

    memset (&in_map_info, 0, sizeof (in_map_info));
    if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ)) {
        g_print ("Error: Failed to map gst buffer\n");
        return; //  GST_PAD_PROBE_OK;
    }

    cudaError_t cuda_err;
    NvBufSurfTransformRect src_rect, dst_rect;
    NvBufSurface *surface = NULL;

    NvDsUserMeta *user_meta = nvds_acquire_user_meta_from_pool(batch_meta);
    surface = (NvBufSurface *) in_map_info.data;  
  
    int batch_size= surface->batchSize;
    printf("\nBatch Size : %d, resolution : %dx%d \n",batch_size,
        surface->surfaceList[0].width, surface->surfaceList[0].height);

    src_rect.top   = 0;
    src_rect.left  = 0;
    src_rect.width = (guint) surface->surfaceList[0].width;
    src_rect.height= (guint) surface->surfaceList[0].height;

    dst_rect.top   = 0;
    dst_rect.left  = 0;
    dst_rect.width = (guint) surface->surfaceList[0].width;
    dst_rect.height= (guint) surface->surfaceList[0].height;

    NvBufSurfTransformParams nvbufsurface_params;
    nvbufsurface_params.src_rect = &src_rect;
    nvbufsurface_params.dst_rect = &dst_rect;
    nvbufsurface_params.transform_flag =  NVBUFSURF_TRANSFORM_CROP_SRC | NVBUFSURF_TRANSFORM_CROP_DST;
    nvbufsurface_params.transform_filter = NvBufSurfTransformInter_Default;
  
    NvBufSurface *dst_surface = NULL;
    NvBufSurfaceCreateParams nvbufsurface_create_params;

    /* An intermediate buffer is required for the RGBA to NV12 conversion,
     * since the JPEG encoder expects NV12 input. */
    nvbufsurface_create_params.gpuId  = surface->gpuId;
    nvbufsurface_create_params.width  = (gint) surface->surfaceList[0].width;
    nvbufsurface_create_params.height = (gint) surface->surfaceList[0].height;
    nvbufsurface_create_params.size = 0;
    nvbufsurface_create_params.colorFormat = NVBUF_COLOR_FORMAT_NV12; // NVBUF_COLOR_FORMAT_RGBA;
    nvbufsurface_create_params.layout = NVBUF_LAYOUT_PITCH;
    nvbufsurface_create_params.memType = NVBUF_MEM_DEFAULT;

    cuda_err = cudaSetDevice (surface->gpuId);

    cudaStream_t cuda_stream;

    cuda_err=cudaStreamCreate (&cuda_stream);

    int create_result = NvBufSurfaceCreate(&dst_surface, batch_size, &nvbufsurface_create_params);	

    NvBufSurfTransformConfigParams transform_config_params;
    NvBufSurfTransform_Error err;

    transform_config_params.compute_mode = NvBufSurfTransformCompute_Default;
    transform_config_params.gpu_id = surface->gpuId;
    transform_config_params.cuda_stream = cuda_stream;
    err = NvBufSurfTransformSetSessionParams (&transform_config_params);

    NvBufSurfaceMemSet (dst_surface, 0, 0, 0);
    err = NvBufSurfTransform (surface, dst_surface, &nvbufsurface_params);
    if (err != NvBufSurfTransformError_Success) {
  	  g_print ("NvBufSurfTransform failed with error %d while converting buffer\n", err);
    }
    NvBufSurfaceMap (dst_surface, 0, 0, NVBUF_MAP_READ);
    // NvBufSurfaceSyncForCpu(dst_surface, 0, 0);
    NvDsObjectMeta obj_meta;
    NvDsObjEncUsrArgs userData = { 0 };

    userData.saveImg = FALSE;
    // strcpy(userData.fileNameImg, "/tmp/junk.jpg");
    userData.attachUsrMeta = TRUE;
    userData.quality = 80;
    userData.scaleImg     = FALSE;
    userData.scaledWidth  = surface->surfaceList[0].width;
    userData.scaledHeight = surface->surfaceList[0].height;
    userData.objNum = 1; // num_rects;

    memset(&obj_meta, 0, sizeof(obj_meta));
    obj_meta.rect_params.width  = surface->surfaceList[0].width;
    obj_meta.rect_params.height = surface->surfaceList[0].height;
    obj_meta.rect_params.top  = 0;
    obj_meta.rect_params.left = 0;
    obj_meta.base_meta.meta_type = NVDS_CROP_IMAGE_META;

    NvDsUserMetaList usrMetaList = { 0 };
    // NvDsUserMeta usrMetaData = { 0 } ;
    NvDsObjEncOutParams enc_jpeg_image;

    memset(&enc_jpeg_image, 0, sizeof(enc_jpeg_image));

    // usrMetaData.base_meta.meta_type = NVDS_CROP_IMAGE_META;
    user_meta->base_meta.meta_type = NVDS_CROP_IMAGE_META;
    user_meta->base_meta.copy_func = (NvDsMetaCopyFunc)copy_user_meta;
    user_meta->base_meta.release_func = (NvDsMetaReleaseFunc)release_user_meta;
    user_meta->user_meta_data = (NvDsObjEncOutParams *)&enc_jpeg_image;

    obj_meta.obj_user_meta_list = &usrMetaList;
    usrMetaList.data = user_meta; // &usrMetaData;
    // usrMetaData.base_meta.copy_func = (NvDsMetaCopyFunc)copy_user_meta;
    // usrMetaData.base_meta.release_func = (NvDsMetaReleaseFunc)release_user_meta;
    // nvds_add_user_meta_to_obj(&obj_meta, &usrMetaData); // SIGSEGV probably assert from locked mutex.
    // nvds_add_user_meta_to_batch(batch_meta, &usrMetaData); // SIGSEGV probably assert from locked mutex.
    // nvds_add_user_meta_to_batch(batch_meta, user_meta); // SIGSEGV probably assert from locked mutex.
    // nvds_add_user_meta_to_frame(frame_meta, &usrMetaData);
    nvds_add_user_meta_to_frame(frame_meta, user_meta);
    // nvds_add_user_meta_to_obj(&obj_meta,  user_meta);

    if (ctx == NULL) ctx = nvds_obj_enc_create_context();
    nvds_obj_enc_process((NvDsObjEncCtxHandle)ctx, &userData, dst_surface, &obj_meta, frame_meta);

    //   *** SIGSEGV happens when this function is called:
    nvds_obj_enc_finish((NvDsObjEncCtxHandle)ctx);
    //  in nvds_acquire_user_meta_from_pool()

    // // NvDsUserMetaList *usrMetaList = obj_meta->obj_user_meta_list;
    // for (NvDsUserMetaList *usrMetaList_p = &usrMetaList; usrMetaList_p != NULL; usrMetaList_p = usrMetaList_p->next)
    // {
        // g_print(" Got usrMetaList after JPEG encoding.\n");
        // NvDsUserMeta *usrMetaData_p = (NvDsUserMeta *) usrMetaList_p->data;
        // if (usrMetaData.base_meta.meta_type == NVDS_CROP_IMAGE_META)
        // {
            // NvDsObjEncOutParams *enc_jpeg_image =
                // (NvDsObjEncOutParams *) usrMetaData.user_meta_data;
            // enc_jpeg_image->outBuffer << -- Encoded JPEG image
            // enc_jpeg_image->outLen    << -- Encoded JPEG image length
            g_print(" usrMetaList after JPEG encoding  HAS AN IMAGE!!! %d bytes.\n", enc_jpeg_image.outLen);
            // break;
        // }
    // }

    NvBufSurfaceUnMap(dst_surface, 0, 0);
    NvBufSurfaceDestroy(dst_surface);
    cudaStreamDestroy(cuda_stream);
    gst_buffer_unmap(buf, &in_map_info);
}

Here’s the stack trace from the gdb session:

(gdb) 
2022-11-16 13:49:11.126[20273] update_dictionary_value():# 1088 WARN - Existing dictionary type is different than this type. 0 - 9.
249	    nvds_add_user_meta_to_frame(frame_meta, user_meta);
(gdb) 
2022-11-16 13:49:11.438[20273] stream_monitor_thread_fn():# 1284 INFO - \/\/\/  Max stalled time for all cams: 0 / 0.
252	    if (ctx == NULL) ctx = nvds_obj_enc_create_context();
(gdb) 
253	    nvds_obj_enc_process((NvDsObjEncCtxHandle)ctx, &userData, dst_surface, &obj_meta, frame_meta);
(gdb) 
[New Thread 0x728bffe6b0 (LWP 20324)]
254	    nvds_obj_enc_finish((NvDsObjEncCtxHandle)ctx);
(gdb) 

Thread 49 "pool" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x728bffe6b0 (LWP 20324)]
0x0000007fb659dc70 in nvds_acquire_user_meta_from_pool () from /opt/nvidia/deepstream/deepstream/lib/libnvds_meta.so
(gdb) where
#0  0x0000007fb659dc70 in nvds_acquire_user_meta_from_pool () at /opt/nvidia/deepstream/deepstream/lib/libnvds_meta.so
#1  0x0000007fb45cf154 in  () at /opt/nvidia/deepstream/deepstream/lib/libnvds_batch_jpegenc.so
#2  0x0000007fb781a558 in  () at /usr/lib/aarch64-linux-gnu/libglib-2.0.so.0
#3  0x0000007fb78afe80 in  () at /usr/lib/aarch64-linux-gnu/libglib-2.0.so.0
(gdb) 

Going back to my original question, where is nvds_obj_enc_finish() getting the NvDsBatchMeta object pointer param from?

Thanks,

  • Doug

nvds_obj_enc_finish doesn’t care about the NvDsBatchMeta object pointer; it’s just used as a sync signal. You can provide us simple demo code to reproduce your crash problem. Then we can debug it. Thanks

Thanks, @yuweiw.
A simple demo would be nice, but here’s code based on a modified version of deepstream-image-meta-test, which also uses ds_image_meta_pgie_config.txt from that example:

#include <stdio.h>
#include <gst/gst.h>
#include <glib.h>
#include <stdlib.h>
#include <string.h>
#include <gmodule.h>
#include <math.h>

#include <cuda_runtime_api.h>
#include "gstnvdsmeta.h"
#include "gst-nvmessage.h"
#include "nvdsmeta.h"

#include "nvbufsurface.h"
#include "nvbufsurftransform.h"
#include "nvds_obj_encode.h"

#define MAX_DISPLAY_LEN 64

#define PGIE_CLASS_ID_VEHICLE 0
#define PGIE_CLASS_ID_PERSON 2

#define SGIE_CLASS_ID_LPD 0

#define PRIMARY_DETECTOR_UID 1
#define SECONDARY_DETECTOR_UID 2
#define SECONDARY_CLASSIFIER_UID 3

/** set the user metadata type */
#define NVDS_USER_FRAME_META_EXAMPLE (nvds_get_user_meta_type("NVIDIA.NVINFER.USER_META"))

#define USER_ARRAY_SIZE 16

/* The muxer output resolution must be set if the input streams will be of
 * different resolution. The muxer will scale all the input frames to this
 * resolution. */
#define MUXER_OUTPUT_WIDTH 1920
#define MUXER_OUTPUT_HEIGHT 1080

/* Muxer batch formation timeout, for e.g. 40 millisec. Should ideally be set
 * based on the fastest source's framerate. */
#define MUXER_BATCH_TIMEOUT_USEC 40000

#define TILED_OUTPUT_WIDTH 1920
#define TILED_OUTPUT_HEIGHT 1080

/* NVIDIA Decoder source pad memory feature. This feature signifies that source
 * pads having this capability will push GstBuffers containing cuda buffers. */
#define GST_CAPS_FEATURES_NVMM "memory:NVMM"

gint frame_number = 0;
gchar pgie_classes_str[4][32] = { "Vehicle", "TwoWheeler", "Person",
  "Roadsign"
};

#define MAX_META_BUFFS 128
typedef enum {
    OBJECT_NONE = 0,
    OBJECT_VEHICLE = 0x1,
    OBJECT_PERSON = 0x2
} object_type;

typedef struct _deepstream_meta_object {
    guint cameranumber;
    guint dtop;
    guint dleft;
    object_type objecttype;
    guint confidence;
    char lc_string[MAX_LABEL_SIZE];
} deepstream_meta_object;

typedef struct _deepstream_meta_output {
    char cpuid[32];
    guint num_objects;
    guint person_count;
    guint vehicle_count;
    guint plate_count;
    guint extracted_plate_count;
    GstClockTime frame_time;
    deepstream_meta_object object[MAX_ELEMENTS_IN_DISPLAY_META];
} deepstream_meta_output;

deepstream_meta_output meta_buff[MAX_META_BUFFS];
int deep_meta_buff_head = 0;
int deep_meta_buff_tail = 0;

static NvDsObjEncCtxHandle ctx;

static gpointer
copy_user_meta(gpointer data, gpointer user_data)
{
    NvDsUserMeta *user_meta = (NvDsUserMeta *)data;
    NvDsObjEncOutParams *enc_jpeg_image = (NvDsObjEncOutParams *) user_meta->user_meta_data;
    NvDsObjEncOutParams *dst_enc_jpeg_image = (NvDsObjEncOutParams *)g_malloc0(sizeof(NvDsObjEncOutParams));

    if (dst_enc_jpeg_image) memcpy(dst_enc_jpeg_image, enc_jpeg_image, sizeof(NvDsObjEncOutParams));

    return (gpointer)dst_enc_jpeg_image;
}

static void
release_user_meta(gpointer data, gpointer user_data)
{
    NvDsUserMeta *user_meta = (NvDsUserMeta *)data;
    if(user_meta->user_meta_data)
    {
        g_free(user_meta->user_meta_data);
        user_meta->user_meta_data = NULL;
    }
}

static void
cvt_nvbufsurf_rgba_to_nv12(GstBuffer *buf, NvDsFrameMeta *frame_meta, NvDsBatchMeta *batch_meta)
{
    GstMapInfo in_map_info;

    memset (&in_map_info, 0, sizeof (in_map_info));
    if (!gst_buffer_map (buf, &in_map_info, GST_MAP_READ)) {
        g_print ("Error: Failed to map gst buffer\n");
        return; //  GST_PAD_PROBE_OK;
    }

    cudaError_t cuda_err;
    NvBufSurfTransformRect src_rect, dst_rect;
    NvBufSurface *surface = NULL;

    NvDsUserMeta *user_meta = nvds_acquire_user_meta_from_pool(batch_meta);
    surface = (NvBufSurface *) in_map_info.data;  
  
    int batch_size= surface->batchSize;
    printf("\ncvt_nvbufsurf_rgba_to_nv12() called.    Batch Size : %d, resolution : %dx%d \n",batch_size,
        surface->surfaceList[0].width, surface->surfaceList[0].height);

    src_rect.top   = 0;
    src_rect.left  = 0;
    src_rect.width = (guint) surface->surfaceList[0].width;
    src_rect.height= (guint) surface->surfaceList[0].height;

    dst_rect.top   = 0;
    dst_rect.left  = 0;
    dst_rect.width = (guint) surface->surfaceList[0].width;
    dst_rect.height= (guint) surface->surfaceList[0].height;

    NvBufSurfTransformParams nvbufsurface_params;
    nvbufsurface_params.src_rect = &src_rect;
    nvbufsurface_params.dst_rect = &dst_rect;
    nvbufsurface_params.transform_flag =  NVBUFSURF_TRANSFORM_CROP_SRC | NVBUFSURF_TRANSFORM_CROP_DST;
    nvbufsurface_params.transform_filter = NvBufSurfTransformInter_Default;
  
    NvBufSurface *dst_surface = NULL;
    NvBufSurfaceCreateParams nvbufsurface_create_params;

    /* An intermediate buffer is required for the RGBA to NV12 conversion,
     * since the JPEG encoder expects NV12 input. */
    nvbufsurface_create_params.gpuId  = surface->gpuId;
    nvbufsurface_create_params.width  = (gint) surface->surfaceList[0].width;
    nvbufsurface_create_params.height = (gint) surface->surfaceList[0].height;
    nvbufsurface_create_params.size = 0;
    nvbufsurface_create_params.colorFormat = NVBUF_COLOR_FORMAT_NV12; // NVBUF_COLOR_FORMAT_RGBA;
    nvbufsurface_create_params.layout = NVBUF_LAYOUT_PITCH;
    nvbufsurface_create_params.memType = NVBUF_MEM_DEFAULT;

    cuda_err = cudaSetDevice (surface->gpuId);

    cudaStream_t cuda_stream;

    cuda_err=cudaStreamCreate (&cuda_stream);

    int create_result = NvBufSurfaceCreate(&dst_surface, batch_size, &nvbufsurface_create_params);	

    NvBufSurfTransformConfigParams transform_config_params;
    NvBufSurfTransform_Error err;

    transform_config_params.compute_mode = NvBufSurfTransformCompute_Default;
    transform_config_params.gpu_id = surface->gpuId;
    transform_config_params.cuda_stream = cuda_stream;
    err = NvBufSurfTransformSetSessionParams (&transform_config_params);

    NvBufSurfaceMemSet (dst_surface, 0, 0, 0);
    err = NvBufSurfTransform (surface, dst_surface, &nvbufsurface_params);
    if (err != NvBufSurfTransformError_Success) {
  	  g_print ("NvBufSurfTransform failed with error %d while converting buffer\n", err);
    }
    NvBufSurfaceMap (dst_surface, 0, 0, NVBUF_MAP_READ);
    // NvBufSurfaceSyncForCpu(dst_surface, 0, 0);
    NvDsObjectMeta obj_meta;
    NvDsObjEncUsrArgs userData = { 0 };

    userData.saveImg = FALSE;
    // strcpy(userData.fileNameImg, "/tmp/junk.jpg");
    userData.attachUsrMeta = TRUE;
    userData.quality = 80;
    userData.scaleImg     = FALSE;
    userData.scaledWidth  = surface->surfaceList[0].width;
    userData.scaledHeight = surface->surfaceList[0].height;
    userData.objNum = 1; // num_rects;

    memset(&obj_meta, 0, sizeof(obj_meta));
    obj_meta.rect_params.width  = surface->surfaceList[0].width;
    obj_meta.rect_params.height = surface->surfaceList[0].height;
    obj_meta.rect_params.top  = 0;
    obj_meta.rect_params.left = 0;
    obj_meta.base_meta.meta_type = NVDS_CROP_IMAGE_META;

    NvDsUserMetaList usrMetaList = { 0 };
    // NvDsUserMeta usrMetaData = { 0 } ;
    NvDsObjEncOutParams enc_jpeg_image;

    memset(&enc_jpeg_image, 0, sizeof(enc_jpeg_image));

    // usrMetaData.base_meta.meta_type = NVDS_CROP_IMAGE_META;
    user_meta->base_meta.meta_type = NVDS_CROP_IMAGE_META;
    user_meta->base_meta.copy_func = (NvDsMetaCopyFunc)copy_user_meta;
    user_meta->base_meta.release_func = (NvDsMetaReleaseFunc)release_user_meta;
    user_meta->user_meta_data = (NvDsObjEncOutParams *)&enc_jpeg_image;

    obj_meta.obj_user_meta_list = &usrMetaList;
    usrMetaList.data = user_meta; // &usrMetaData;
    // usrMetaData.base_meta.copy_func = (NvDsMetaCopyFunc)copy_user_meta;
    // usrMetaData.base_meta.release_func = (NvDsMetaReleaseFunc)release_user_meta;
    // nvds_add_user_meta_to_obj(&obj_meta, &usrMetaData); // SIGSEGV probably assert from locked mutex.
    // nvds_add_user_meta_to_batch(batch_meta, &usrMetaData); // SIGSEGV probably assert from locked mutex.
    // nvds_add_user_meta_to_batch(batch_meta, user_meta); // SIGSEGV probably assert from locked mutex.
    // nvds_add_user_meta_to_frame(frame_meta, &usrMetaData);
    nvds_add_user_meta_to_frame(frame_meta, user_meta);
    // nvds_add_user_meta_to_obj(&obj_meta,  user_meta);

    g_print("cvt_nvbufsurf_rgba_to_nv12: Converted surface to NV12 color format.  Encoding JPEG image.\n");
    if (ctx == NULL) ctx = nvds_obj_enc_create_context();
    g_print("cvt_nvbufsurf_rgba_to_nv12:   Created JPEG Context for Encoding JPEG image.\n");
    nvds_obj_enc_process((NvDsObjEncCtxHandle)ctx, &userData, dst_surface, &obj_meta, frame_meta);
    g_print("cvt_nvbufsurf_rgba_to_nv12:   Finalizing Encoding JPEG image.\n");
    nvds_obj_enc_finish((NvDsObjEncCtxHandle)ctx);
    g_print("cvt_nvbufsurf_rgba_to_nv12:   Encoding JPEG image completed.\n");

    // // NvDsUserMetaList *usrMetaList = obj_meta->obj_user_meta_list;
    // for (NvDsUserMetaList *usrMetaList_p = &usrMetaList; usrMetaList_p != NULL; usrMetaList_p = usrMetaList_p->next)
    // {
        // g_print(" Got usrMetaList after JPEG encoding.\n");
        // NvDsUserMeta *usrMetaData_p = (NvDsUserMeta *) usrMetaList_p->data;
        // if (usrMetaData.base_meta.meta_type == NVDS_CROP_IMAGE_META)
        // {
            // NvDsObjEncOutParams *enc_jpeg_image =
                // (NvDsObjEncOutParams *) usrMetaData.user_meta_data;
            // enc_jpeg_image->outBuffer << -- Encoded JPEG image
            // enc_jpeg_image->outLen    << -- Encoded JPEG image length
            g_print(" usrMetaList after JPEG encoding  HAS AN IMAGE!!! %d bytes.\n", enc_jpeg_image.outLen);
            // break;
        // }
    // }

    NvBufSurfaceUnMap(dst_surface, 0, 0);
    NvBufSurfaceDestroy(dst_surface);
    cudaStreamDestroy(cuda_stream);
    gst_buffer_unmap(buf, &in_map_info);

}

/* osd_sink_pad_buffer_probe  will extract metadata received on OSD sink pad
 * and update params for drawing rectangle, object information etc. */

static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsObjectMeta *obj_meta = NULL;
  NvDsObjectMeta *temp_meta = NULL;
  NvDsObjectMeta *obj_meta_inrecord[MAX_ELEMENTS_IN_DISPLAY_META] = { NULL };
  // guint vehicle_count = 0;
  // guint person_count = 0;
  // guint lp_count = 0;
  guint label_i = 0;
  guint current_object = 0;
  guint source_id = 0;
  // guint temptop = 0, templeft = 0;
  NvDsMetaList * l_frame = NULL;
  NvDsMetaList * l_obj = NULL;
  NvDsMetaList * l_class = NULL;
  NvDsMetaList * l_label = NULL;
  NvDsDisplayMeta *display_meta = NULL;
  NvDsClassifierMeta * class_meta = NULL;
  NvDsLabelInfo * label_info = NULL;
  GstClockTime now;
  // perf_measure * perf = (perf_measure *)(u_data);
  deepstream_meta_output probe_output;

  bool got_jpeg = false;
  struct cudaDeviceProp prop;
  int current_device = -1;

  cudaGetDevice(&current_device);
  cudaGetDeviceProperties(&prop, current_device);

  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  // guint cam_width = MUXER_OUTPUT_WIDTH/tiler_columns;
  // guint cam_height = MUXER_OUTPUT_HEIGHT/tiler_rows;

  now = g_get_monotonic_time(); // maybe should use gst_date_time_new_now_local_time?

  //g_print ("---- Entering osd_sink_pad_buffer_probe, time = %lld\n", now);

#if 0
  if (perf->pre_time == GST_CLOCK_TIME_NONE) 
  {
    perf->pre_time = now;
    perf->total_time = GST_CLOCK_TIME_NONE;
  } 
  else 
  {
    if (perf->total_time == GST_CLOCK_TIME_NONE) 
    {
      perf->total_time = (now - perf->pre_time);
    } 
    else 
    {
      perf->total_time += (now - perf->pre_time);
    }
    perf->pre_time = now;
    perf->count++;
  }
#endif

  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
       l_frame = l_frame->next) 
  {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
    int offset = 0;
    if (!frame_meta)
      continue;
  
    memset(&probe_output, 0, sizeof(probe_output));
    
    // source_id = l_frame->source_id; // might need to use the pad_index instead.
    source_id = frame_meta->source_id;
    // source_id = ((NvDsFrameMetaList *)(l_frame))->source_id; // might need to use the pad_index instead.
      
    for (l_obj = frame_meta->obj_meta_list; (l_obj != NULL); //  && (probe_output.num_objects < MAX_ELEMENTS_IN_DISPLAY_META);
         l_obj = l_obj->next) 
    {
      obj_meta = (NvDsObjectMeta *) (l_obj->data);

      if (!obj_meta)
        continue;

      current_object = probe_output.num_objects;
      NvDsUserMetaList *usrMetaList = obj_meta->obj_user_meta_list;
      
      /* Check that the object has been detected by the primary detector
      * and that the class id is that of vehicles/persons. */
      if (obj_meta->unique_component_id == PRIMARY_DETECTOR_UID) 
      {

        if (current_object < MAX_ELEMENTS_IN_DISPLAY_META &&
            ((obj_meta->class_id == PGIE_CLASS_ID_VEHICLE) || (obj_meta->class_id == PGIE_CLASS_ID_PERSON)))
        {
            //record the object pointer so that some child object can find its parent
            obj_meta_inrecord[current_object] = obj_meta;
            
            if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE)
            {
                probe_output.vehicle_count++;
                probe_output.object[current_object].objecttype = OBJECT_VEHICLE;
                
            }
            else
            {
                probe_output.person_count++;
                probe_output.object[current_object].objecttype = OBJECT_PERSON;
            }
            
            // get the camera number and location based on the coordinate
            probe_output.object[current_object].cameranumber = source_id; 
            probe_output.object[current_object].dtop = obj_meta->rect_params.top;
            probe_output.object[current_object].dleft = obj_meta->rect_params.left;

            // memcpy(probe_output.cpuid, mycpuid, sizeof(mycpuid));
            probe_output.object[current_object].confidence = obj_meta->confidence;
            probe_output.num_objects++;
            
        }
      }
    }

    if (probe_output.num_objects > 0 && probe_output.num_objects < MAX_ELEMENTS_IN_DISPLAY_META)
    {
        for (l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next) 
        {
            
          obj_meta = (NvDsObjectMeta *) (l_obj->data);

          if (!obj_meta)
            continue;

          // Doug - Set bounding box color
          obj_meta->rect_params.border_color.blue = ((obj_meta->object_id | 1) |
                           (obj_meta->object_id >> 8  | 0x1) |
                           (obj_meta->object_id >> 16 | 0x1) |
                           (obj_meta->object_id >> 24 | 0x1)) << 4;

          obj_meta->rect_params.border_color.green = ((obj_meta->object_id >> 1  | 1) |
                           (obj_meta->object_id >> 9  | 0x1) |
                           (obj_meta->object_id >> 17 | 0x1) |
                           (obj_meta->object_id >> 25 | 0x1)) << 4;

          obj_meta->rect_params.border_color.red = ((obj_meta->object_id >> 2  | 1) |
                           (obj_meta->object_id >> 10  | 0x1) |
                           (obj_meta->object_id >> 18 | 0x1) |
                           (obj_meta->object_id >> 26 | 0x1)) << 4;

          obj_meta->rect_params.border_color.alpha = 1;

          if (obj_meta->unique_component_id == SECONDARY_DETECTOR_UID) 
          {
            if (obj_meta->class_id == SGIE_CLASS_ID_LPD) 
            {
              probe_output.plate_count++;
              /* Print this info only when operating in secondary model. */
              if (obj_meta->parent)
                printf("License plate found for parent object %p (type=%s)\n",
                  obj_meta->parent, pgie_classes_str[obj_meta->parent->class_id]);

              obj_meta->text_params.set_bg_clr = 1;
              obj_meta->text_params.text_bg_clr.red = 0.0;
              obj_meta->text_params.text_bg_clr.green = 0.0;
              obj_meta->text_params.text_bg_clr.blue = 0.0;
              obj_meta->text_params.text_bg_clr.alpha = 0.0;

              obj_meta->text_params.font_params.font_color.red = 1.0;
              obj_meta->text_params.font_params.font_color.green = 1.0;
              obj_meta->text_params.font_params.font_color.blue = 0.0;
              obj_meta->text_params.font_params.font_color.alpha = 1.0;
              obj_meta->text_params.font_params.font_size = 12;
            }
          }

          for (l_class = obj_meta->classifier_meta_list; l_class != NULL;
               l_class = l_class->next) 
          {
            class_meta = (NvDsClassifierMeta *)(l_class->data);
            
            if (!class_meta)
              continue;
            
            if (class_meta->unique_component_id == SECONDARY_CLASSIFIER_UID) 
            {
              for ( label_i = 0, l_label = class_meta->label_info_list;
                label_i < class_meta->num_labels && l_label; label_i++,
                l_label = l_label->next) 
              {
                label_info = (NvDsLabelInfo *)(l_label->data);
                if (label_info) 
                {
                  if (label_info->label_id == 0 && label_info->result_class_id == 1) 
                  {
                    printf("Plate License %s\n",label_info->result_label);
                    printf("License plate found for object type=%s\n", pgie_classes_str[obj_meta->class_id]);
                    temp_meta = (obj_meta->unique_component_id == PRIMARY_DETECTOR_UID) ? obj_meta : obj_meta->parent;
                    
                    for(current_object = 0; current_object < probe_output.num_objects; current_object++)
                    {
                        if((probe_output.object[current_object].objecttype == OBJECT_VEHICLE) && (temp_meta == obj_meta_inrecord[current_object]))
                        {
                            sprintf(probe_output.object[current_object].lc_string, "%s", label_info->result_label);
                            probe_output.extracted_plate_count++;
                            break;
                        }
                    }
                  }
                }
              }
            }
          }
          
        }
    }

    display_meta = nvds_acquire_display_meta_from_pool(batch_meta);
    NvOSD_TextParams *txt_params  = &display_meta->text_params[0];
    display_meta->num_labels = 1;
    txt_params->display_text = (char*) g_malloc0 (MAX_DISPLAY_LEN);
    offset = snprintf(txt_params->display_text, MAX_DISPLAY_LEN,
                 "Person = %d ", probe_output.person_count);
    offset += snprintf(txt_params->display_text + offset, MAX_DISPLAY_LEN - offset,
                 "Vehicle = %d ", probe_output.vehicle_count);

    /* Now set the offsets where the string should appear */
    txt_params->x_offset = 10;
    txt_params->y_offset = 12;

    /* Font, font-color and font-size. Use a string literal so the name
     * outlives this probe (a stack buffer here would dangle). */
    txt_params->font_params.font_name = (char *) "Serif";
    txt_params->font_params.font_size = 10;
    txt_params->font_params.font_color.red = 1.0;
    txt_params->font_params.font_color.green = 1.0;
    txt_params->font_params.font_color.blue = 1.0;
    txt_params->font_params.font_color.alpha = 1.0;

    /* Text background color */
    txt_params->set_bg_clr = 1;
    txt_params->text_bg_clr.red = 0.0;
    txt_params->text_bg_clr.green = 0.0;
    txt_params->text_bg_clr.blue = 0.0;
    txt_params->text_bg_clr.alpha = 1.0;

    nvds_add_display_meta_to_frame(frame_meta, display_meta);

    if((probe_output.plate_count > 0) || (probe_output.person_count > 0))
        g_print ("Frame Number = %d of Stream = %d, Vehicle Count = %d, Person Count = %d, License Plate Count = %d ************* \n\n",
             frame_number, frame_meta->pad_index, probe_output.vehicle_count, probe_output.person_count,
             probe_output.plate_count);
  
    probe_output.frame_time = now;
  
    // copy the information to a meta data structure, expect the data collection process to process it in time.
    if(((deep_meta_buff_head+1)%MAX_META_BUFFS) != deep_meta_buff_tail)
    {
      memcpy(&meta_buff[deep_meta_buff_head], &probe_output, sizeof(probe_output));
      deep_meta_buff_head++;
      if(deep_meta_buff_head >= MAX_META_BUFFS) deep_meta_buff_head = 0;
    }

    frame_number++;
    // total_plate_number += probe_output.plate_count;
#define CAPTURE_IMAGE_W_OBJ 1
#ifdef CAPTURE_IMAGE_W_OBJ
    if (probe_output.person_count > 0)
    {
        // nvds_obj_enc_process() requires NV12 color format & nvdsosd element is RGBA
        if (got_jpeg == false)
        {
            got_jpeg = true;
            cvt_nvbufsurf_rgba_to_nv12(buf, frame_meta, batch_meta);
        }
        g_print("osd_sink_pad_buffer_probe: Encoded JPEG frame.");
    }
#endif
  }
  

  return GST_PAD_PROBE_OK;
}

/* pgie_src_pad_buffer_probe will extract metadata received on pgie src pad
 * and update params for drawing rectangle, object information etc. We also
 * iterate through the object list and encode the cropped objects as jpeg
 * images and attach it as user meta to the respective objects.*/
static GstPadProbeReturn
pgie_src_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info, gpointer ctx)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  GstMapInfo inmap = GST_MAP_INFO_INIT;
  if (!gst_buffer_map (buf, &inmap, GST_MAP_READ)) {
    GST_ERROR ("input buffer mapinfo failed");
    return GST_PAD_PROBE_DROP;
  }
  NvBufSurface *ip_surf = (NvBufSurface *) inmap.data;
  gst_buffer_unmap (buf, &inmap);

  NvDsObjectMeta *obj_meta = NULL;
  guint vehicle_count = 0;
  guint person_count = 0;
  NvDsMetaList *l_frame = NULL;
  NvDsMetaList *l_obj = NULL;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  NvDsObjectMeta image_meta;
  bool do_jpeg_enc = TRUE;

  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
    guint num_rects = 0;
    for (l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next) {
      obj_meta = (NvDsObjectMeta *) (l_obj->data);
      if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE) {
        vehicle_count++;
        num_rects++;
      }
      if (obj_meta->class_id == PGIE_CLASS_ID_PERSON) {
        person_count++;
        num_rects++;
      }
      /* Conditions that user needs to set to encode the detected objects of
       * interest. Here, by default all the detected objects are encoded.
       * For demonstration, we will encode the first object in the frame */
      if (do_jpeg_enc == TRUE &&
          (obj_meta->class_id == PGIE_CLASS_ID_PERSON ||
           obj_meta->class_id == PGIE_CLASS_ID_VEHICLE)
          && num_rects == 1) {
        NvDsObjEncUsrArgs userData = { 0 };
        memset(&image_meta, 0, sizeof(image_meta));
        // image_meta.base_meta.batch_meta   = obj_meta->base_meta.batch_meta;
        image_meta.base_meta.meta_type    = NVDS_CROP_IMAGE_META;
        image_meta.base_meta.uContext     = obj_meta->base_meta.uContext;
        image_meta.base_meta.copy_func    = obj_meta->base_meta.copy_func;
        image_meta.base_meta.release_func = obj_meta->base_meta.release_func;
        image_meta.class_id   = obj_meta->class_id;
        image_meta.confidence = obj_meta->confidence;
        image_meta.rect_params.left   = 0.0;
        image_meta.rect_params.top    = 0.0;
        image_meta.rect_params.width  = 960; // MUXER_OUTPUT_WIDTH;
        image_meta.rect_params.height = 540; // MUXER_OUTPUT_HEIGHT;
        /* To be set by user */
        userData.saveImg = FALSE; // TRUE; // save_img;
        userData.attachUsrMeta = FALSE; //  TRUE; // attach_user_meta;
        /* Set if Image scaling Required */
        userData.scaleImg     = TRUE;
        userData.scaledWidth  = 960;
        userData.scaledHeight = 540;
        /* Preset */
        userData.objNum = 1; // num_rects;
        /* Quality */
        userData.quality = 80;
        /*Main Function Call */
        nvds_obj_enc_process((NvDsObjEncCtxHandle)ctx, &userData, ip_surf, &image_meta, frame_meta);
        g_print ("JPEG Encoding frame with object detected.\n");
        nvds_obj_enc_finish((NvDsObjEncCtxHandle)ctx);
        g_print ("JPEG FINISHED Encoding frame with object detected.\n");
        do_jpeg_enc = FALSE;
      }
    }
  }
  // nvds_obj_enc_finish ((NvDsObjEncCtxHandle)ctx);
  return GST_PAD_PROBE_OK;
}

static gboolean
bus_call (GstBus * bus, GstMessage * msg, gpointer data)
{
  GMainLoop *loop = (GMainLoop *) data;
  switch (GST_MESSAGE_TYPE (msg)) {
    case GST_MESSAGE_EOS:
      g_print ("End of stream\n");
      g_main_loop_quit (loop);
      break;
    case GST_MESSAGE_WARNING:
    {
      gchar *debug;
      GError *error;
      gst_message_parse_warning (msg, &error, &debug);
      g_printerr ("WARNING from element %s: %s\n",
          GST_OBJECT_NAME (msg->src), error->message);
      g_free (debug);
      g_printerr ("Warning: %s\n", error->message);
      g_error_free (error);
      break;
    }
    case GST_MESSAGE_ERROR:
    {
      gchar *debug;
      GError *error;
      gst_message_parse_error (msg, &error, &debug);
      g_printerr ("ERROR from element %s: %s\n",
          GST_OBJECT_NAME (msg->src), error->message);
      if (debug)
        g_printerr ("Error details: %s\n", debug);
      g_free (debug);
      g_error_free (error);
      g_main_loop_quit (loop);
      break;
    }
    case GST_MESSAGE_ELEMENT:
    {
      if (gst_nvmessage_is_stream_eos (msg)) {
        guint stream_id;
        if (gst_nvmessage_parse_stream_eos (msg, &stream_id)) {
          g_print ("Got EOS from stream %d\n", stream_id);
        }
      }
      break;
    }
    default:
      break;
  }
  return TRUE;
}

static void
cb_newpad (GstElement * decodebin, GstPad * decoder_src_pad, gpointer data)
{
  GstCaps *caps = gst_pad_get_current_caps (decoder_src_pad);
  const GstStructure *str = gst_caps_get_structure (caps, 0);
  const gchar *name = gst_structure_get_name (str);
  GstElement *source_bin = (GstElement *) data;
  GstCapsFeatures *features = gst_caps_get_features (caps, 0);

  /* Need to check if the pad created by the decodebin is for video and not
   * audio. */
  if (!strncmp (name, "video", 5)) {
    /* Link the decodebin pad only if decodebin has picked nvidia
     * decoder plugin nvdec_*. We do this by checking if the pad caps contain
     * NVMM memory features. */
    if (gst_caps_features_contains (features, GST_CAPS_FEATURES_NVMM)) {
      /* Get the source bin ghost pad */
      GstPad *bin_ghost_pad = gst_element_get_static_pad (source_bin, "src");
      if (!gst_ghost_pad_set_target (GST_GHOST_PAD (bin_ghost_pad),
              decoder_src_pad)) {
        g_printerr ("Failed to link decoder src pad to source bin ghost pad\n");
      }
      gst_object_unref (bin_ghost_pad);
    } else {
      g_printerr ("Error: Decodebin did not pick nvidia decoder plugin.\n");
    }
  }
}

static void
decodebin_child_added (GstChildProxy * child_proxy, GObject * object,
    gchar * name, gpointer user_data)
{
  if (g_strrstr (name, "decodebin") == name) {
    g_signal_connect (G_OBJECT (object), "child-added",
        G_CALLBACK (decodebin_child_added), user_data);
  }
}

static GstElement *
create_source_bin (guint index, gchar * uri)
{
  GstElement *bin = NULL, *uri_decode_bin = NULL;
  gchar bin_name[16] = { };

  g_snprintf (bin_name, 15, "source-bin-%02d", index);
  /* Create a source GstBin to abstract this bin's content from the rest of the
   * pipeline */
  bin = gst_bin_new (bin_name);

  /* Source element for reading from the uri.
   * We will use decodebin and let it figure out the container format of the
   * stream and the codec and plug the appropriate demux and decode plugins. */
  uri_decode_bin = gst_element_factory_make ("uridecodebin", "uri-decode-bin");

  if (!bin || !uri_decode_bin) {
    g_printerr ("One element in source bin could not be created.\n");
    return NULL;
  }

  /* We set the input uri to the source element */
  g_object_set (G_OBJECT (uri_decode_bin), "uri", uri, NULL);

  /* Connect to the "pad-added" signal of the decodebin which generates a
   * callback once a new pad for raw data has been created by the decodebin */
  g_signal_connect (G_OBJECT (uri_decode_bin), "pad-added",
      G_CALLBACK (cb_newpad), bin);
  g_signal_connect (G_OBJECT (uri_decode_bin), "child-added",
      G_CALLBACK (decodebin_child_added), bin);

  gst_bin_add (GST_BIN (bin), uri_decode_bin);

  /* We need to create a ghost pad for the source bin which will act as a proxy
   * for the video decoder src pad. The ghost pad will not have a target right
   * now. Once the decode bin creates the video decoder and generates the
   * cb_newpad callback, we will set the ghost pad target to the video decoder
   * src pad. */
  if (!gst_element_add_pad (bin, gst_ghost_pad_new_no_target ("src",
              GST_PAD_SRC))) {
    g_printerr ("Failed to add ghost pad in source bin\n");
    return NULL;
  }

  return bin;
}

int
main (int argc, char *argv[])
{
  GMainLoop *loop = NULL;
  GstElement *pipeline = NULL, *streammux = NULL, *sink = NULL, *pgie = NULL,
      *nvvidconv = NULL, *nvosd = NULL, *tiler = NULL;
  GstElement *transform = NULL;
  GstBus *bus = NULL;
  guint bus_watch_id;
  GstPad *pgie_src_pad = NULL;
  GstPad *osd_sink_pad = NULL;
  guint i, num_sources;
  guint tiler_rows, tiler_columns;
  guint pgie_batch_size;

  int current_device = -1;
  cudaGetDevice(&current_device);
  struct cudaDeviceProp prop;
  cudaGetDeviceProperties(&prop, current_device);

  /* Check input arguments */
  if (argc < 2) {
    g_printerr ("Usage: %s <uri1> [uri2] ... [uriN] \n", argv[0]);
    return -1;
  }
  num_sources = argc - 1;

  /* Standard GStreamer initialization */
  gst_init (&argc, &argv);
  loop = g_main_loop_new (NULL, FALSE);

  /* Create gstreamer elements */
  /* Create Pipeline element that will form a connection of other elements */
  pipeline = gst_pipeline_new ("ds-image-meta-test-pipeline");

  /* Create nvstreammux instance to form batches from one or more sources. */
  streammux = gst_element_factory_make ("nvstreammux", "stream-muxer");

  if (!pipeline || !streammux) {
    g_printerr ("One element could not be created. Exiting.\n");
    return -1;
  }
  gst_bin_add (GST_BIN (pipeline), streammux);

  for (i = 0; i < num_sources; i++) {
    GstPad *sinkpad, *srcpad;
    gchar pad_name[16] = { };
    GstElement *source_bin = create_source_bin (i, argv[i + 1]);

    if (!source_bin) {
      g_printerr ("Failed to create source bin. Exiting.\n");
      return -1;
    }

    gst_bin_add (GST_BIN (pipeline), source_bin);

    g_snprintf (pad_name, 15, "sink_%u", i);
    sinkpad = gst_element_get_request_pad (streammux, pad_name);
    if (!sinkpad) {
      g_printerr ("Streammux request sink pad failed. Exiting.\n");
      return -1;
    }

    srcpad = gst_element_get_static_pad (source_bin, "src");
    if (!srcpad) {
      g_printerr ("Failed to get src pad of source bin. Exiting.\n");
      return -1;
    }

    if (gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK) {
      g_printerr ("Failed to link source bin to stream muxer. Exiting.\n");
      return -1;
    }

    gst_object_unref (srcpad);
    gst_object_unref (sinkpad);
  }

  /* Use nvinfer to infer on batched frame. */
  pgie = gst_element_factory_make ("nvinfer", "primary-nvinference-engine");

  /* Use nvtiler to composite the batched frames into a 2D tiled array based
   * on the source of the frames. */
  tiler = gst_element_factory_make ("nvmultistreamtiler", "nvtiler");

  /* Use convertor to convert from NV12 to RGBA as required by nvosd */
  nvvidconv = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter");

  /* Create OSD to draw on the converted RGBA buffer */
  nvosd = gst_element_factory_make ("nvdsosd", "nv-onscreendisplay");

  /* Finally render the osd output */
  if(prop.integrated) {
    transform = gst_element_factory_make ("nvegltransform", "nvegl-transform");
  }
  sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");

  if (!pgie || !tiler || !nvvidconv || !nvosd || !sink) {
    g_printerr ("One element could not be created. Exiting.\n");
    return -1;
  }

  if(!transform && prop.integrated) {
    g_printerr ("One tegra element could not be created. Exiting.\n");
    return -1;
  }

  g_object_set (G_OBJECT (streammux), "width", MUXER_OUTPUT_WIDTH, "height",
      MUXER_OUTPUT_HEIGHT, "batch-size", num_sources,
      "batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC, NULL);

  /* Configure the nvinfer element using the nvinfer config file. */
  g_object_set (G_OBJECT (pgie),
      "config-file-path", "ds_image_meta_pgie_config.txt", NULL);

  /* Override the batch-size set in the config file with the number of sources. */
  g_object_get (G_OBJECT (pgie), "batch-size", &pgie_batch_size, NULL);
  if (pgie_batch_size != num_sources) {
    g_printerr
        ("WARNING: Overriding infer-config batch-size (%d) with number of sources (%d)\n",
        pgie_batch_size, num_sources);
    g_object_set (G_OBJECT (pgie), "batch-size", num_sources, NULL);
  }

  tiler_rows = (guint) sqrt (num_sources);
  tiler_columns = (guint) ceil (1.0 * num_sources / tiler_rows);
  /* we set the tiler properties here */
  g_object_set (G_OBJECT (tiler), "rows", tiler_rows, "columns", tiler_columns,
      "width", TILED_OUTPUT_WIDTH, "height", TILED_OUTPUT_HEIGHT, NULL);

  /* we add a message handler */
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  bus_watch_id = gst_bus_add_watch (bus, bus_call, loop);
  gst_object_unref (bus);

  /* Set up the pipeline */
  /* we add all elements into the pipeline */
  if(prop.integrated) {
    gst_bin_add_many (GST_BIN (pipeline), pgie, tiler, nvvidconv, nvosd,
        transform, sink, NULL);
    /* we link the elements together
    * nvstreammux -> nvinfer -> nvtiler -> nvvidconv -> nvosd -> video-renderer */
    if (!gst_element_link_many (streammux, pgie, tiler, nvvidconv, nvosd,
            transform, sink, NULL)) {
      g_printerr ("Elements could not be linked. Exiting.\n");
      return -1;
    }
  }
  else {
    gst_bin_add_many (GST_BIN (pipeline), pgie, tiler, nvvidconv, nvosd, sink,
        NULL);
    /* we link the elements together
    * nvstreammux -> nvinfer -> nvtiler -> nvvidconv -> nvosd -> video-renderer */
    if (!gst_element_link_many (streammux, pgie, tiler, nvvidconv, nvosd, sink,
            NULL)) {
      g_printerr ("Elements could not be linked. Exiting.\n");
      return -1;
    }
  }
  /* Let's add a probe to get informed of the metadata generated. We add the
   * probe to the src pad of the pgie element, since by that time the buffer
   * will have all the nvinfer metadata. */
  pgie_src_pad = gst_element_get_static_pad (pgie, "src");
  /* Create context for object encoding */
  NvDsObjEncCtxHandle obj_ctx_handle = nvds_obj_enc_create_context ();
  if (!obj_ctx_handle) {
    g_print ("Unable to create context\n");
    return -1;
  }
  if (!pgie_src_pad)
    g_print ("Unable to get src pad\n");
  else
    gst_pad_add_probe (pgie_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
        pgie_src_pad_buffer_probe, (gpointer) obj_ctx_handle, NULL);
  gst_object_unref (pgie_src_pad);

  /* Let's add a probe to get informed of the metadata generated. We add the
   * probe to the sink pad of the osd element, since by that time the buffer
   * will have all the metadata. */
  osd_sink_pad = gst_element_get_static_pad (nvosd, "sink");
  if (!osd_sink_pad)
    g_print ("Unable to get sink pad\n");
  else
    gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
        osd_sink_pad_buffer_probe, (gpointer) obj_ctx_handle, NULL);
  gst_object_unref (osd_sink_pad);

  /* Set the pipeline to "playing" state */
  g_print ("Now playing:");
  for (i = 0; i < num_sources; i++) {
    g_print (" %s,", argv[i + 1]);
  }
  g_print ("\n");
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Wait till pipeline encounters an error or EOS */
  g_print ("Running...\n");
  g_main_loop_run (loop);

  /* Destroy context for Object Encoding */
  nvds_obj_enc_destroy_context (obj_ctx_handle);

  /* Out of the main loop, clean up nicely */
  g_print ("Returned, stopping playback\n");
  gst_element_set_state (pipeline, GST_STATE_NULL);
  g_print ("Deleting pipeline\n");
  gst_object_unref (GST_OBJECT (pipeline));
  g_source_remove (bus_watch_id);
  g_main_loop_unref (loop);
  return 0;
}

Here’s the command line for compiling/linking:

g++ -o nvdsosd_to_jpeg nvidia_extract.cpp -Wall -std=c++11 -fPIC -Wno-error=deprecated-declarations -D_GLIBCXX_USE_CXX11_ABI=1 -g -DPLATFORM_TEGRA -I/opt/nvidia/deepstream/deepstream/sources/includes -I/usr/include/gstreamer-1.0 -I/usr/include -I/usr/include/glib-2.0 -I/usr/include -I/usr/include/arm-linux-gnueabihf -I/usr/lib/aarch64-linux-gnu/glib-2.0/include -I../src -I/usr/local/cuda/targets/aarch64-linux/include -Wl,--start-group -lnvdsgst_meta -lnvds_meta -lnvbufsurface -lnvbufsurftransform -lnvdsgst_helper -lnvds_batch_jpegenc -lm -lstdc++ -lglib-2.0 -lgio-2.0 -lgstreamer-1.0 -lgobject-2.0 -lgmodule-2.0 -lgthread-2.0 -lgstapp-1.0 -lgstaudio-1.0 -Wl,--end-group -Wl,-rpath,/opt/nvidia/deepstream/deepstream/lib -L/opt/nvidia/deepstream/deepstream/lib -L./ -L/usr/lib/arm-linux-gnueabihf -L../../../bin -L../../../3rdParty/boost/boost_1_79_0/stage/lib -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/local/cuda-10.2/lib64 -lboost_thread -lnvdsgst_meta -lnvds_meta -lnvbufsurface -lnvbufsurftransform -lnvdsgst_helper -lnvds_batch_jpegenc -lm -lstdc++ -lglib-2.0 -lgio-2.0 -lgstreamer-1.0 -lgobject-2.0 -lgmodule-2.0 -lgthread-2.0 -lgstapp-1.0 -lgstaudio-1.0 -lcudart -lm -lcuda

Thanks,

  • Doug

Hi @doug4350,
1. We suggest you refer to the demo code when using the JPEG encoding API. You can call the finish API once at the end instead of after each encode.
2. The nvds_obj_enc_process function uses nvds_acquire_user_meta_from_pool internally, so if you use some resources incorrectly, it may crash when you call the finish API.
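
The flow in deepstream_image_meta_test.c is roughly like this (a sketch with details elided; ip_surf is the mapped input surface as in the demo):

/* Encode inside the loops, then call the finish API once per batch. */
for (l_frame = batch_meta->frame_meta_list; l_frame != NULL; l_frame = l_frame->next) {
  NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
  for (l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next) {
    NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
    NvDsObjEncUsrArgs userData = { 0 };
    /* ... populate userData ... */
    nvds_obj_enc_process ((NvDsObjEncCtxHandle) ctx, &userData, ip_surf, obj_meta, frame_meta);
  }
}
nvds_obj_enc_finish ((NvDsObjEncCtxHandle) ctx); /* one sync call at the end */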

Hi @yuweiw,
So, were you/your team able to reproduce the SIGSEGV by debugging my simple demo code, as you said?

  1. The entire sample I provided is based on the demo code (deepstream_image_meta_test.c), which I fixed up to encode the entire frame instead of the bounding boxes.
    Doesn’t nvds_obj_enc_process release the JPEG-encoding resources, making the encoded JPEG image available? When I check (NvDsObjEncOutParams *)user_meta->user_meta_data after nvds_obj_enc_process but before the finish API, outBuffer is NULL and outLen is 0. Something else must be needed to get the encoded JPEG image if the finish API is not called until streaming ends.

  2. That’s obvious, since that’s where the SIGSEGV is trapped. Can you provide a working demo or the correct resource usage for encoding the entire image? The documentation is not clear and the existing demo examples are insufficient.

Thanks,

  • Doug

You can refer to the source code: the pgie_src_pad_buffer_probe
flow in deepstream_image_meta_test.c. Just modify the code as follows:

Add the width and height to obj_meta:

        /* Set if Image scaling Required */
        userData.scaleImg = FALSE;
        userData.scaledWidth = 0;
        userData.scaledHeight = 0;

+       obj_meta->rect_params.width = frame_meta->source_frame_width;
+       obj_meta->rect_params.height = frame_meta->source_frame_height;
+       obj_meta->rect_params.top = 0.0f;
+       obj_meta->rect_params.left = 0.0f;

        /* Preset */
        userData.objNum = num_rects;
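
Applied to the pgie_src_pad_buffer_probe above, the relevant section would look roughly like this (a sketch; saveImg/attachUsrMeta set for the in-memory use case discussed in this thread):

NvDsObjEncUsrArgs userData = { 0 };
userData.saveImg = FALSE;
userData.attachUsrMeta = TRUE;
/* Set if Image scaling Required */
userData.scaleImg = FALSE;
userData.scaledWidth = 0;
userData.scaledHeight = 0;

/* Stretch the object's rectangle to cover the whole source frame. */
obj_meta->rect_params.left = 0.0f;
obj_meta->rect_params.top = 0.0f;
obj_meta->rect_params.width = frame_meta->source_frame_width;
obj_meta->rect_params.height = frame_meta->source_frame_height;

/* Preset */
userData.objNum = num_rects;
/* Quality */
userData.quality = 80;

nvds_obj_enc_process ((NvDsObjEncCtxHandle) ctx, &userData, ip_surf, obj_meta, frame_meta);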

Thank you for your help, @yuweiw.
I got it to work.
After cleaning up the test code and moving the encoding back to pgie_src_pad_buffer_probe, it is working now: obj_meta->obj_user_meta_list->data contains the expected NvDsObjEncOutParams object, with outBuffer and outLen populated accordingly after nvds_obj_enc_finish.
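
For anyone landing here later, here is a minimal sketch of reading the encoded image back out after nvds_obj_enc_finish (following the commented-out pattern earlier in this thread and deepstream_image_meta_test.c; consume_jpeg is a hypothetical consumer):

for (NvDsUserMetaList *l = obj_meta->obj_user_meta_list; l != NULL; l = l->next) {
    NvDsUserMeta *user_meta = (NvDsUserMeta *) l->data;
    if (user_meta->base_meta.meta_type == NVDS_CROP_IMAGE_META) {
        NvDsObjEncOutParams *enc_jpeg =
            (NvDsObjEncOutParams *) user_meta->user_meta_data;
        /* enc_jpeg->outBuffer holds the JPEG bytes, enc_jpeg->outLen the length. */
        consume_jpeg (enc_jpeg->outBuffer, enc_jpeg->outLen); /* hypothetical */
    }
}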

  • Doug
