Deepstream-test5 error when saving image

Update
Hi @Amycao,
I hope this video makes it easier to evaluate my current problem.
The demo video:


*Note: Sorry if the video contains music; I forgot to turn it off.
And this is the deepstream-transfer-learning source I am running: deepstream_transfer_learning_app_main.cpp (37.3 KB)

I copied the configuration you used and only made some minor changes (GPU ID, video file). I can reproduce your issue without the tracker; with the tracker enabled, the issue is gone. I am not sure which part goes wrong on your side, so you may need to do some experiments to identify the root cause.
I will try deepstream-transfer-learning to see how it goes.
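For reference, the tracker group I enabled is basically the one from the shipped sample config, roughly like the sketch below (the library path and values are from my install and may differ with your DeepStream version):

[tracker]
enable=1
tracker-width=640
tracker-height=384
gpu-id=1
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_mot_klt.so
enable-batch-process=1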

I just tried with deepstream-transfer-learning and did not observe the black saved-images issue after changing the GPU ID to another GPU. BTW, I used the built-in configuration.

Hi @Amycao,
Thank you for your reply. I also tried to check on my system. One thing looks suspicious to me: when I run on the second GPU, I don't see it using as many resources as GPU 0. It is similar to the video I provided above. Do you think that could be the problem in my situation?

Did you enable sink2 RTSP streaming? I see this in the configuration you pasted earlier:

[sink2]
enable=1
type=4

Yes, I enabled RTSP streaming. Does that affect the image saving process?

Please disable it and give it a try.
In the RTSP streaming path there are nvvideoconvert components, which default to GPU 0. That's why, with the GPU ID set to 1, you see lower GPU utilization on it than on GPU 0.
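For example, for this test you can simply turn the group off and keep the rest of your sink settings as they are:

[sink2]
enable=0
type=4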

Hi @Amycao, thank you for your reply.
I think that may not be the cause of the problem, because the transfer-learning test has no sink2 config, yet it still produces black images as I showed you in the video demo. Anyway, I will test with deepstream-test5 and report back to you.

But you said you enabled RTSP streaming.
BTW, which sample was the configuration in comment 14 used for? It has RTSP streaming enabled.

Hi @Amycao, so sorry for making my answer confusing.
The config file in comment 14 and before that is from deepstream-test5.
The config file in comment 17 is from deepstream-transfer-learning.
So when you asked me about enabling RTSP streaming, I thought you were talking about deepstream-test5, so I said yes (my mistake).
I wanted to show you that the problem does not come from my custom deepstream-test5 source code, since it also happens with an unmodified DeepStream app (here I took deepstream-transfer-learning as an example).

Hi,
Sorry for the late reply.
Is this still an issue?

Hi @Amycao,
Thank you for your reply. After applying your suggestion with your code, it works with the callback function you modified in deepstream_app.c:

static GstPadProbeReturn
gie_primary_processing_done_buf_prob (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  GstMapInfo inmap = GST_MAP_INFO_INIT;
  if (!gst_buffer_map (buf, &inmap, GST_MAP_READ)) {
    GST_ERROR ("input buffer mapinfo failed");
    /* This probe must return a GstPadProbeReturn, not a GstFlowReturn. */
    return GST_PAD_PROBE_OK;
  }
  NvBufSurface *ip_surf = (NvBufSurface *) inmap.data;
  gst_buffer_unmap (buf, &inmap);

  NvDsObjectMeta *obj_meta = NULL;
  guint vehicle_count = 0;
  guint person_count = 0;
  NvDsMetaList *l_frame = NULL;
  NvDsMetaList *l_obj = NULL;
  AppCtx *appCtx = (AppCtx *) u_data;
  /* The object-encode context is created per buffer here; destroy it after
   * nvds_obj_enc_finish () below so it does not leak. */
  NvDsObjEncCtxHandle ctx = nvds_obj_enc_create_context ();

  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta) {
    NVGSTDS_WARN_MSG_V ("Batch meta not found for buffer %p", buf);
    return GST_PAD_PROBE_OK;
  }
  for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
    guint num_rects = 0;
    for (l_obj = frame_meta->obj_meta_list; l_obj != NULL; l_obj = l_obj->next) {
      obj_meta = (NvDsObjectMeta *) (l_obj->data);
      if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE) {
        vehicle_count++;
        num_rects++;
      }
      if (obj_meta->class_id == PGIE_CLASS_ID_PERSON) {
        person_count++;
        num_rects++;
      }
      /* Conditions that user needs to set to encode the detected objects of
       * interest. Here, by default all the detected objects are encoded.
       * For demonstration, we will encode the first object in the frame */
      if ((obj_meta->class_id == PGIE_CLASS_ID_PERSON
              || obj_meta->class_id == PGIE_CLASS_ID_VEHICLE)
          && num_rects == 1) {
        NvDsObjEncUsrArgs userData = { 0 };
        /* To be set by user */
        userData.saveImg = save_img;
        userData.attachUsrMeta = attach_user_meta;
        /* Set if Image scaling Required */
        userData.scaleImg = FALSE;
        userData.scaledWidth = 0;
        userData.scaledHeight = 0;
        /* Preset */
        userData.objNum = num_rects;
        /*Main Function Call */
        nvds_obj_enc_process (ctx, &userData, ip_surf, obj_meta, frame_meta);
      }
    }
  }
  nvds_obj_enc_finish (ctx);
  nvds_obj_enc_destroy_context (ctx);

  write_kitti_output (appCtx, batch_meta);

  return GST_PAD_PROBE_OK;
}

But it fails when the handler is called again in bbox_generated_probe_after_analytics in the deepstream_test5_app_main.c file. So my guess is: in the first call the memory is still on GPU 1, so the image is extracted and saved to disk, but when the buffer is processed again in deepstream_test5_app_main the callback is now using GPU 0, so it cannot extract the memory. Is my thinking right? I also checked that if I set gpu_id=1 in the config file, the application still uses GPU 0 somewhere, so I think this may be causing the problem.
I'm really tired of this problem, so please help me fix it or explain what I'm dealing with.
Thank you for your support.

Hi @hung,
From the video you attached, it looks like part of the application is still running on GPU#0 even when you run your program on GPU#1, which may cause the issue.
So, when you want to run the program on GPU#1, could you just use CUDA_VISIBLE_DEVICES=1 and run the application with “gpu-id=0”, i.e.

$ CUDA_VISIBLE_DEVICES=1 ./your_application

The possible reason is: in your application, if you spawn a new CPU thread, that thread by default runs on GPU#0, so you need to call cudaSetDevice() to set the GPU id for the thread. But it's easy to forget to set the GPU id in a thread, so the easiest way is, as I mentioned above, to use “CUDA_VISIBLE_DEVICES”.
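A minimal sketch of what I mean, assuming a worker thread created with GLib (the function name and the way the GPU id is passed in are just placeholders):

#include <cuda_runtime_api.h>
#include <glib.h>

/* Any thread that touches GPU buffers must select the intended device
 * first, otherwise CUDA work in that thread lands on GPU#0. */
static gpointer
my_worker_thread (gpointer user_data)
{
  guint gpu_id = GPOINTER_TO_UINT (user_data);

  /* Bind this thread to the configured GPU before any CUDA / NvBufSurface work. */
  cudaError_t err = cudaSetDevice (gpu_id);
  if (err != cudaSuccess) {
    g_warning ("cudaSetDevice(%u) failed: %s", gpu_id,
        cudaGetErrorString (err));
    return NULL;
  }

  /* ... per-thread GPU work goes here ... */
  return NULL;
}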

Please try the deepstream-image-meta-test app for this with multiple GPUs. If it works, then the issue is on your end and you should debug it.
Or you can try Martin's suggestion; it works on my side.
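For example, something like this (the stream URI below is from my setup; adjust it for yours):

$ CUDA_VISIBLE_DEVICES=1 ./deepstream-image-meta-test file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264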

Hi @mchi and @Amycao, thank you for your replies. I will try it and update you on the status.