Why does the procedure cause a free(): invalid pointer error in the deepstream_parallel_inference_app code?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU (Tesla T4)
• DeepStream Version
DeepStream 6.2
• TensorRT Version
TensorRT 8.5.2.2
• NVIDIA GPU Driver Version (valid for GPU only)
Driver Version: 515.86.01
• Issue Type( questions, new requirements, bugs)
I tried to add a function that saves the original images while decoding the stream for further use, and it works successfully on a Jetson Xavier NX board.
But when I transferred it to an x86 platform in a Docker container (base image: nvcr.io/nvidia/deepstream:6.2-devel), it causes an error: free(): invalid pointer, Aborted (core dumped), and a core file is generated.
I used gdb to check the details and got the following information:


It seems the error was caused by the message?

To reproduce the error, I have attached the code of deepstream_paraller_app.cpp and the YAML configuration file for your reference.
deepstream_app_config.zip (15.6 KB)

I’m not clear why the code causes such an error…

Are there any suggestions for this issue? Hope to get a reply, thanks!

Sorry for the long delay.

I think it may be related to the part of the code you deleted. Try adding initialization code in generate_event_msg_meta:

generate_event_msg_meta (...)
{
    ........
    meta->extMsg = NULL;
    meta->extMsgSize = 0;
}

Thanks for your reply.
I checked the original code of generate_event_msg_meta, and it seems it does not contain these two lines:


I also added the code as you suggested, but I still get the same error.

It is strange that the same code works fine on the Jetson Xavier NX board but gets an invalid pointer on the x86 GPU platform…

To avoid misunderstanding, here is more information for your reference:

  1. I added the function save_frame_as_jpeg to save each frame while decoding the video stream for further use; the function is run in an independent save_thread, shown below:

  2. I added the confidence value to the display_text in the function osd_sink_pad_buffer_probe, shown below:

I’m not sure whether these modifications cause the error. Hope for your reply, thanks!

1. If you want to save a frame or object, using nvds_obj_enc_process is recommended. It does not require OpenCV or copying the data to the CPU, so it has better performance.

The complete example is at:

/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-image-meta-test/deepstream_image_meta_test.c

2. Is the stack trace consistent for each crash?

You can recompile libnvds_kafka_proto.so with debug symbols and use it to debug the problem. The source is at:

/opt/nvidia/deepstream/deepstream/sources/libs/kafka_protocol_adaptor
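A possible sequence for that rebuild (the CFLAGS override and install step are assumptions; check the adaptor's Makefile for the exact variables it honors):

```shell
# Rebuild the Kafka protocol adaptor with debug symbols so gdb can
# resolve frames inside libnvds_kafka_proto.so
cd /opt/nvidia/deepstream/deepstream/sources/libs/kafka_protocol_adaptor
make CFLAGS="-g -O0"
# Back up the installed library, then replace it with the debug build
sudo cp /opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so{,.bak}
sudo cp libnvds_kafka_proto.so /opt/nvidia/deepstream/deepstream/lib/
```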

Thanks for your reply.

  1. For saving frames, it seems to be done at the osd_sink_pad_buffer_probe stage. Is it possible to save the decoded frame for each source separately? I think I tested this before and got a mixed picture of the sources. I want to save every frame of each source for further use.
  2. For the crash, I debugged each core file, and it seems to be the same cause as shown before. I am confused about why the same code runs successfully on the Jetson NX board but gets the error on the x86 GPU.

nvds_obj_enc_process can save each frame in the batch as an image, so as long as you call it downstream of nvstreammux, it will be OK.

I’m not sure what the problem is; there may be no crash on Jetson because of a different way of managing memory. I noticed you are using DS-6.2; could you try a newer version?

I’m a beginner. Is there a demo I can check to see how to save the decoded frames from nvstreammux for each source independently?

I just tested the code on DS-7.1, and it seems some plugins have changed, which causes more errors, such as the file libnvparsers.so not being found…

Add a probe function to the nvstreammux src pad, and then use nvds_obj_enc_process to save all frames in the batch as images.

/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-image-meta-test/deepstream_image_meta_test.c is a complete example:

for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
    /* For demonstration purposes, we will encode the first 10 frames. */
    if(frame_count <= 10) {
      NvDsObjEncUsrArgs frameData = { 0 };
      /* Preset */
      frameData.isFrame = 1;
      /* To be set by user */
      frameData.saveImg = save_img;
       /* set the file name */
      // frameData.fileNameImg
      frameData.attachUsrMeta = attach_user_meta;
      /* Set if Image scaling Required */
      frameData.scaleImg = FALSE;
      frameData.scaledWidth = 0;
      frameData.scaledHeight = 0;
      /* Quality */
      frameData.quality = 80;
      /* Set to calculate time taken to encode JPG image. */
      if (calc_enc) {
        frameData.calcEncodeTime = 1;
      }
      /* Main Function Call */
      nvds_obj_enc_process (ctx, &frameData, ip_surf, NULL, frame_meta);
    }
}

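Registering such a probe on the muxer's src pad could look like the following sketch; encode_frames_probe, streammux, and ctx are assumed names, and the nvds_obj_enc context would come from the context-creation call used in the image-meta-test sample:

```c
/* Sketch: attach the frame-encoding probe downstream of nvstreammux.
 * encode_frames_probe is assumed to contain the loop shown above. */
GstPad *mux_src_pad = gst_element_get_static_pad (streammux, "src");
gst_pad_add_probe (mux_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
    encode_frames_probe, (gpointer) ctx, NULL);
gst_object_unref (mux_src_pad);
```

Because every frame in the batch carries its own NvDsFrameMeta (including source_id), each source's frames can be written to separate files from this single probe.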
Modify the Makefile of your project. DS-7.1 has upgraded to TRT-10.3, which no longer includes this library (libnvparsers was removed in TensorRT 10).
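For example, if the link line still references the removed parser library, the change would be along these lines (assuming your Makefile lists the TensorRT libraries explicitly):

```makefile
# Before (DS-6.2 / TRT-8.5):
LIBS += -lnvinfer -lnvparsers
# After (DS-7.1 / TRT-10.3: nvparsers was removed):
LIBS += -lnvinfer
```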

Thanks for your reply. I will check it later.