Need some tips on debugging a deepstream-app-based C++ application

• Hardware Platform (Jetson / GPU)
Jetson Xavier NX

• DeepStream Version
6.0

• JetPack Version (valid for Jetson only)
4.6

• Issue Type (questions, new requirements, bugs)
question

I’m trying to improve the accuracy of my pipeline that detects people with faces (not just people).
As my code is based on deepstream-app, a lot of the pipeline’s internals are hidden or pretty complex, as it supports multiple sources, various formats, different sinks, etc.

While debugging my app I needed the ability to save not only the detected objects but the whole frame with all boxes/labels drawn. I tried attaching a probe function to various elements of the pipeline (nvosd, the transformer, etc.; I generated the pipeline graph first), but it either saves a picture without any drawings or crashes.
So my first question is: how can one achieve this?

As I’m trying to find patterns to filter out false positives (a person with someone else’s face inside their bbox), it would be nice to see each frame with all the drawings on screen while paused AND step through frame by frame.
The app allows pausing/resuming playback and that works well, but one cannot achieve frame-by-frame accuracy this way. I tried the approach from this tutorial and it doesn’t work as expected: I use appCtx[0]->pipeline.tiler_tee as data.video_sink, and it does make one step if I pause the pipeline and then press ‘n’, but after that I have to resume playback and pause it again to perform another step, so it’s not a frame-by-frame solution yet.
Can anyone advise me on how to implement that step idea in deepstream-app so that I get frame-accurate steps?
And speaking of steps and other trick modes, I was unable to change my pipeline’s playback speed at all. How can that be done (sometimes it would be great to slow things down a bit)?
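For reference, these are the trick-mode primitives I’ve been experimenting with, adapted from GStreamer’s Basic tutorial 13 (just a sketch, untested inside deepstream-app; video_sink stands for whatever sink element actually renders):

/* Advance exactly one buffer while the pipeline is PAUSED. */
gst_element_send_event (video_sink,
    gst_event_new_step (GST_FORMAT_BUFFERS, 1, 1.0, TRUE, FALSE));

/* Change the playback rate (e.g. 0.5 = half speed) with a flushing seek.
 * Note: rate changes may not work with live sources. */
static void
set_playback_rate (GstElement * pipeline, gdouble rate)
{
    gint64 pos;
    if (!gst_element_query_position (pipeline, GST_FORMAT_TIME, &pos))
        return;
    gst_element_seek (pipeline, rate, GST_FORMAT_TIME,
        GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_ACCURATE,
        GST_SEEK_TYPE_SET, pos,
        GST_SEEK_TYPE_SET, GST_CLOCK_TIME_NONE);
}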

Please refer to the deepstream-image-meta-test sample for how to save cropped images.
You can also use the many GStreamer debugging tools to debug DeepStream apps; deepstream-app is open source. See Basic tutorial 11: Debugging tools (gstreamer.freedesktop.org).

I was asking about saving the whole frame with all bboxes drawn, i.e. what I see on my screen if I enable the corresponding sink. deepstream-image-meta-test saves only cropped parts of the original frame, with no OSD data added.

The bboxes are drawn onto the frame by nvdsosd, so you can dump the output of nvdsosd if you record in your app which frames should be dumped. The object encoding APIs can be used to save the whole frame if you set the object to cover the frame:

bool nvds_obj_enc_process (NvDsObjEncCtxHandle,
                           NvDsObjEncUsrArgs *,
                           NvBufSurface *,
                           NvDsObjectMeta *,
                           NvDsFrameMeta *);

If you set the NvDsObjectMeta *'s bbox to span the whole frame, then the whole frame will be saved.
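In other words, something like this (a minimal sketch; ctx, surf and frame_meta are assumed to come from your probe context):

/* Dummy object whose bbox spans the full frame; handing it to the
 * encoder makes it save the entire frame as a JPEG. */
NvDsObjectMeta dummy_obj = { 0 };
dummy_obj.rect_params.left   = 0;
dummy_obj.rect_params.top    = 0;
dummy_obj.rect_params.width  = surf->surfaceList[frame_meta->batch_id].width;
dummy_obj.rect_params.height = surf->surfaceList[frame_meta->batch_id].height;

NvDsObjEncUsrArgs args = { 0 };
args.saveImg = TRUE;
args.quality = 80;
nvds_obj_enc_process (ctx, &args, surf, &dummy_obj, frame_meta);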

DeepStream is just an SDK; the APIs can be used to implement your own app.

Well, from the samples I know that we can save the whole frame via a dummy object.
The main problems I have right now are where to attach the probe function for it to work, and how to find out why the call to nvds_obj_enc_process breaks my app.
Here’s the code that attaches the probe function to the src pad:

GstElement *elem = appCtx[0]->pipeline.instance_bins[0].osd_bin.nvosd;
if (elem) {
    g_print ("trying to add save_rendered_frame_probe..\n");

    GstPad *_pad = gst_element_get_static_pad (elem, "src");
    if (!_pad)
        NVGSTDS_ERR_MSG_V ("Unable to get src pad\n");
    else {
        gst_pad_add_probe (_pad, GST_PAD_PROBE_TYPE_BUFFER,
            save_rendered_frame_probe, NULL, NULL);
        g_print ("done!\n");
        gst_object_unref (_pad);
    }
} else {
    g_print ("cannot add save_rendered_frame_probe as elem is null\n");
    should_goto_done = 1;
}

and here’s the probe:

static GstPadProbeReturn
save_rendered_frame_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
    std::string _path;

    GstBuffer *buf = (GstBuffer *) info->data;
    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
    if (!batch_meta)
        return GST_PAD_PROBE_OK;

    GstMapInfo inmap = GST_MAP_INFO_INIT;
    if (!gst_buffer_map (buf, &inmap, GST_MAP_READ)) {
        std::cerr << "input buffer mapinfo failed\n";
        return GST_PAD_PROBE_OK;
    }
    NvBufSurface *ip_surf = (NvBufSurface *) inmap.data;

    for (NvDsFrameMetaList *l_frame = batch_meta->frame_meta_list; l_frame != NULL;
        l_frame = l_frame->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
        if (!frame_meta)
            continue;

        /* Dummy object whose bbox covers the whole frame; zero-initialized
         * so the remaining NvDsObjectMeta fields don't hold garbage. */
        NvDsObjectMeta dummy_obj_meta = {};
        dummy_obj_meta.rect_params.width = ip_surf->surfaceList[frame_meta->batch_id].width;
        dummy_obj_meta.rect_params.height = ip_surf->surfaceList[frame_meta->batch_id].height;
        dummy_obj_meta.rect_params.top = 0;
        dummy_obj_meta.rect_params.left = 0;

        NvDsObjEncUsrArgs userData = { 0 };
        userData.saveImg = true;
        userData.attachUsrMeta = false;

        std::ostringstream oss;
        oss << cropped_images_output_folder
            << path__source << frame_meta->frame_num << path__separator
            << rstats.num_batches.seen
            << path__extension;
        _path = oss.str ();
        /* fileNameImg is a fixed-size char array; copy with truncation. */
        g_strlcpy (userData.fileNameImg, _path.c_str (), sizeof (userData.fileNameImg));
        userData.objNum = 0;
        userData.quality = 80;

        nvds_obj_enc_process (obj_ctx_handle,
            &userData, ip_surf, &dummy_obj_meta, frame_meta);
    }

    nvds_obj_enc_finish (obj_ctx_handle);

    /* Keep the buffer mapped until the encoder is done with the surface. */
    gst_buffer_unmap (buf, &inmap);

    return GST_PAD_PROBE_OK;
}

As soon as execution reaches the first nvds_obj_enc_process call, my app terminates with no error message and no core dump, so I have no idea how to debug it further.

There is a sample showing the correct place to use the interface.

Actually, I used that sample as an example and I can see nothing wrong in my code.

The difference is that in deepstream-image-meta-test they call nvds_obj_enc_process and nvds_obj_enc_finish after the PGIE and save the generated crops manually, before the OSD, so they don’t get any OSD drawings in the output.
As I need those drawings, I attach the probe to the src pad of nvosd and rely on nvds_obj_enc_process’s ability to save the crops itself by setting userData.saveImg to true.

It’s probably something simple, but I don’t know what exactly to change to make it work.
It saves full frames perfectly (apart from the missing drawings) if I add my probe to the sink pad of appCtx[0]->pipeline.instance_bins[0].osd_bin.bin:

GstElement *elem = appCtx[0]->pipeline.instance_bins[0].osd_bin.bin;
if (elem) {
    GstPad *_pad = gst_element_get_static_pad (elem, "sink");
    if (!_pad) {
        NVGSTDS_ERR_MSG_V ("Unable to get sink pad\n");
        should_goto_done = TRUE;
    } else {
        gst_pad_add_probe (_pad, GST_PAD_PROBE_TYPE_BUFFER,
            save_rendered_frame_probe, (gpointer) obj_ctx_handle, NULL);
        gst_object_unref (_pad);
    }
} else {
    g_print ("cannot add save_rendered_frame_probe as elem is null\n");
    should_goto_done = TRUE;
}

but it crashes if I add it to the src pad (i.e. replace “sink” with “src” in the gst_element_get_static_pad call).

Any idea why? Here is my pipeline graph, just in case.
playing.pdf (51.1 KB)

And by the way, how can I make my app dump core when it crashes?
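(For reference, the usual approach seems to be raising the core-size limit, either with ulimit -c unlimited in the shell that launches the app, or programmatically; a sketch:)

#include <sys/resource.h>

/* Call early in main(): allow unlimited core dumps for this process.
 * Where the core file lands is still decided by the kernel's
 * core_pattern setting. */
static void
enable_core_dumps (void)
{
    struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY };
    if (setrlimit (RLIMIT_CORE, &rl) != 0)
        perror ("setrlimit(RLIMIT_CORE)");
}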

The osd_bin is constructed from nvvideoconvert and nvdsosd. By the time you “add it to the src pad”, the video format has been converted from NV12 to RGBA, and nvds_obj_enc_process() only supports NV12 input.

Please make sure to call nvds_obj_enc_process() on NV12 buffers.
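(One way to check what format a pad actually carries is to query its negotiated caps, e.g. from inside the probe; a sketch:)

/* Log the negotiated video format on the probed pad to confirm
 * whether it is NV12 or RGBA. */
GstCaps *caps = gst_pad_get_current_caps (pad);
if (caps) {
    GstStructure *s = gst_caps_get_structure (caps, 0);
    g_print ("probed pad format: %s\n",
        gst_structure_get_string (s, "format"));
    gst_caps_unref (caps);
}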

OK. I think nvvideoconvert is used to convert the buffer to RGBA, as nvdsosd takes that format as input.
The trouble is that I didn’t see anything in the documentation about this NV12-only restriction,

and the encoder isn’t open source, is it?
Anyway, could you show me how to make sure it gets NV12 input?
Perhaps I need to add another nvvideoconvert after nvdsosd?

You can dump the pipeline graph to check the format if you don’t want to go through the source code line by line. DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

Yes.

I included the graph in my previous reply, so yes, I know how to do that, but that wasn’t my question.

That was a lot of help indeed, thank you.
OK, I used deepstream-faciallandmark-app as an example of how to construct a pipeline branch that converts the OSD output into a JPEG file; here’s my code:

GstElement *sink = NULL, *nvvidconv1 = NULL,
    *outenc = NULL, *capfilt = NULL;
GstElement *queue6 = NULL, *queue7 = NULL;
GstCaps *caps = NULL;
gchar filename[] = "debug/full_frame.jpg";

nvvidconv1 = gst_element_factory_make ("nvvideoconvert", "nvvid-converter1");
capfilt = gst_element_factory_make ("capsfilter", "nvvideo-caps");
queue6 = gst_element_factory_make ("queue", "queue6");
outenc = gst_element_factory_make ("jpegenc", "jpegenc");
queue7 = gst_element_factory_make ("queue", "queue7");
sink = gst_element_factory_make ("filesink", "nvvideo-renderer");

if (!nvvidconv1 || !sink || !capfilt || !outenc || !queue6 || !queue7) {
    g_printerr ("One element could not be created. Exiting.\n");
    return;
}

caps = gst_caps_new_simple ("video/x-raw",
    "format", G_TYPE_STRING, "I420", NULL);
g_object_set (G_OBJECT (capfilt), "caps", caps, NULL);

g_object_set (G_OBJECT (sink), "sync", FALSE, "async", FALSE, NULL);
g_object_set (G_OBJECT (sink), "location", filename, NULL);
g_object_set (G_OBJECT (sink), "enable-last-sample", FALSE, NULL);
gst_bin_add_many (GST_BIN (appCtx[0]->pipeline.pipeline), nvvidconv1, outenc,
    capfilt, queue6, queue7, NULL);

if (!appCtx[0]->pipeline.instance_bins[0].osd_bin.nvosd) {
    g_printerr ("nvosd=NULL\n");
    return;
}

if (!gst_element_link_many (appCtx[0]->pipeline.instance_bins[0].osd_bin.nvosd,
    queue6, nvvidconv1, capfilt, queue7, outenc, sink, NULL)) {
    g_printerr ("OSD and sink elements link failure.\n");
    return;
}

and it fails to link the elements: I get “OSD and sink elements link failure.”
How can I make it work?
Just a reminder: the graph of my pipeline is in my previous message, and my application is based on deepstream-app, so it uses those NvDsPipeline structures.
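My current suspicion, for what it’s worth: nvosd’s src pad is already linked inside osd_bin (it feeds the bin’s ghost pad), and my new elements live in the top-level pipeline rather than in that bin, so gst_element_link_many() can’t succeed. Something like the following might be needed instead (a sketch only, untested; it reuses queue6 from the code above and assumes the JPEG branch elements are added to the same bin as nvosd):

/* Insert a tee between nvosd and its current peer so the original path
 * keeps working while a second branch feeds the JPEG elements.
 * GStreamer 1.14 (JetPack 4.6) uses gst_element_get_request_pad. */
GstElement *nvosd = appCtx[0]->pipeline.instance_bins[0].osd_bin.nvosd;
GstBin *parent = GST_BIN (GST_ELEMENT_PARENT (nvosd));   /* osd_bin.bin */

GstPad *osd_src = gst_element_get_static_pad (nvosd, "src");
GstPad *peer = gst_pad_get_peer (osd_src);               /* current peer */

GstElement *tee = gst_element_factory_make ("tee", "osd-tee");
gst_bin_add (parent, tee);

gst_pad_unlink (osd_src, peer);
gst_element_link (nvosd, tee);                           /* nvosd -> tee */

GstPad *tee_main = gst_element_get_request_pad (tee, "src_%u");
gst_pad_link (tee_main, peer);                           /* tee -> original path */

/* Second branch: tee -> queue6 -> ... -> filesink. The branch elements
 * must be gst_bin_add()ed to 'parent' (not the top-level pipeline)
 * beforehand, and all of this should happen before the pipeline goes
 * to PLAYING, or the new elements' states must be synced manually. */
GstPad *tee_jpeg = gst_element_get_request_pad (tee, "src_%u");
GstPad *q_sink = gst_element_get_static_pad (queue6, "sink");
gst_pad_link (tee_jpeg, q_sink);

gst_object_unref (q_sink);
gst_object_unref (peer);
gst_object_unref (osd_src);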

Please debug by yourself.

That’s fantastic, honestly.
Nvidia provides no source code, no documentation, no examples, and not the slightest help to developers on the forum apart from “please debug by yourself”. That’s the way to go.
That’s the way to gain new users for their solutions.
Based on this, I definitely wouldn’t recommend Nvidia to any business from now on.
