Nvinfer GStreamer plugin is updating/adding metadata on read-only buffers

• Hardware Platform (GPU)
• DeepStream Version 6.1.1, from docker image nvcr.io/nvidia/deepstream:6.1.1-devel
• JetPack Version (-)
• TensorRT Version (8.4.1.5)
• NVIDIA GPU Driver Version (520.61.05)
• Issue Type (questions)

I’m adding some custom functionality to nvinfer and encountered a potential problem with the nvinfer plugin implementation. The plugin maps buffers as read-only and does not check whether they are writable with the gst_buffer_is_writable function (see the gst_nvinfer_submit_input_buffer function in gstnvinfer.cpp). The GStreamer documentation for gst_buffer_is_writable clearly states that metadata may be modified only when the buffer is writable. This could cause problems when two plugins try to access/modify the same buffer metadata at the same time, which can happen when using elements such as tee. Is this a known issue, or is there something I missed?

Code snippet from gst_nvinfer_submit_input_buffer in gstnvinfer.cpp that maps the buffer as read-only:

  /* Map the buffer contents and get the pointer to NvBufSurface. */
  if (!gst_buffer_map (inbuf, &in_map_info, GST_MAP_READ)) {
    return GST_FLOW_ERROR;
  }
  in_surf = (NvBufSurface *) in_map_info.data;

  nvds_set_input_system_timestamp(inbuf, GST_ELEMENT_NAME(nvinfer));

  if (nvinfer->input_tensor_from_meta) {
    flow_ret = gst_nvinfer_process_tensor_input (nvinfer, inbuf, in_surf);
  } else if (nvinfer->process_full_frame) {
    flow_ret = gst_nvinfer_process_full_frame (nvinfer, inbuf, in_surf);
  } else {
    flow_ret = gst_nvinfer_process_objects (nvinfer, inbuf, in_surf);
  }

  /* Unmap the input buffer contents. */
  if (in_map_info.data)
    gst_buffer_unmap (inbuf, &in_map_info);
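The rule the GStreamer documentation describes is based on reference counting: a buffer (and its metadata) may be modified only when the caller holds the sole reference, and gst_buffer_make_writable otherwise returns a copy. To make the race concrete, here is a minimal pure-C sketch of that copy-on-write idea using a toy ToyBuffer type — these names are illustrative stand-ins, not the real GstBuffer API:

```c
#include <stdlib.h>

/* Toy stand-in for a refcounted buffer -- NOT the real GstBuffer API.
 * It only illustrates the rule from the GStreamer docs: metadata may be
 * modified only by the holder of the sole reference. */
typedef struct {
  int refcount;
  int meta;     /* stand-in for attached metadata */
} ToyBuffer;

static ToyBuffer *toy_buffer_new (void) {
  ToyBuffer *b = calloc (1, sizeof (ToyBuffer));
  b->refcount = 1;
  return b;
}

/* A downstream element (e.g. one branch of a tee) taking a reference. */
static ToyBuffer *toy_buffer_ref (ToyBuffer *b) {
  b->refcount++;
  return b;
}

/* Mirrors the semantics of gst_buffer_is_writable(): sole owner only. */
static int toy_buffer_is_writable (const ToyBuffer *b) {
  return b->refcount == 1;
}

/* Copy-on-write: return a buffer that is safe to modify, copying if the
 * buffer is shared with another holder. */
static ToyBuffer *toy_buffer_make_writable (ToyBuffer *b) {
  if (toy_buffer_is_writable (b))
    return b;
  ToyBuffer *copy = toy_buffer_new ();
  copy->meta = b->meta;
  b->refcount--;            /* drop our reference to the shared buffer */
  return copy;
}
```

In this model, a plugin that writes metadata without first checking writability (as the snippet above does) mutates state that another branch may be reading concurrently; the copy-on-write step is what a writability check would buy.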

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)

• DeepStream Version

• JetPack Version (valid for Jetson only)

• TensorRT Version

• NVIDIA GPU Driver Version (valid for GPU only)

• Issue Type (questions, new requirements, bugs)

• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

I added the info, but it’s a question about the general implementation of the nvinfer plugin, not something platform-specific.

Could you share your use case or media pipeline? Did you meet any actual problem? As you know, most use cases are cascaded, so it is rare for two plugins to try to access/modify the same metadata at the same time. If you want to do inference in parallel, please refer to deepstream_parallel_inference_app.

Thank you for pointing out deepstream_parallel_inference_app; I did not know about it or the metamux plugin.

My use case is that I wanted to use two nvinfer plugins in parallel after nvstreammux; the example made it clearer to me how running parallel pipelines should be handled. But I still think an error should be thrown when nvinfer gets a read-only buffer and tries to modify its metadata. I am probably not the only one who will try to use a pipeline in the way I mentioned above.

And there are also other small things I encountered:

  • Buffers are unmapped in gst_nvinfer_submit_input_buffer, but the contents of the buffer are used in other threads even after the buffer is unmapped
  • Variables such as nvinfer->last_flow_ret are written and read from different threads without a mutex

These things do not crash the pipeline most of the time, but they certainly do not follow best practices.

There are gst_buffer_map and gst_buffer_unmap calls in gst_nvinfer_submit_input_buffer; I did not see that “the buffer is used in other threads even after unmapping”.

  1. Please find process_cond; some threads synchronize through this condition variable.
  2. If classifier_async_mode is true, last_flow_ret will not be used in the gst_nvinfer_output_loop thread.
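For readers unfamiliar with this pattern, the condition-variable handoff mentioned here can be sketched with plain pthreads. The ProcessQueue type and its fields below are illustrative stand-ins for this style of synchronization, not the actual nvinfer members:

```c
#include <pthread.h>
#include <stdbool.h>

/* Minimal sketch of condition-variable synchronization between a thread
 * that submits a batch and a thread that processes it. Names are toy
 * stand-ins, not the real nvinfer structures. */
typedef struct {
  pthread_mutex_t lock;
  pthread_cond_t  cond;
  bool            batch_ready;
  int             batch_id;
} ProcessQueue;

static void queue_init (ProcessQueue *q) {
  pthread_mutex_init (&q->lock, NULL);
  pthread_cond_init (&q->cond, NULL);
  q->batch_ready = false;
  q->batch_id = 0;
}

/* Submitting side: publish a batch and wake the processing thread. */
static void queue_push (ProcessQueue *q, int id) {
  pthread_mutex_lock (&q->lock);
  q->batch_id = id;
  q->batch_ready = true;
  pthread_cond_signal (&q->cond);
  pthread_mutex_unlock (&q->lock);
}

/* Processing side: block until a batch is available, then take it. */
static int queue_pop (ProcessQueue *q) {
  pthread_mutex_lock (&q->lock);
  while (!q->batch_ready)          /* loop guards against spurious wakeups */
    pthread_cond_wait (&q->cond, &q->lock);
  q->batch_ready = false;
  int id = q->batch_id;
  pthread_mutex_unlock (&q->lock);
  return id;
}
```

The key point is that the shared state (batch_ready, batch_id) is only ever touched while holding the mutex associated with the condition variable.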

The content of the buffer (the device memory holding the frames) is used for scaling, which is launched asynchronously in the gst_nvinfer_process_* functions called from gst_nvinfer_submit_input_buffer, and the scaling operation is synchronized in the input_queue_thread thread. This means the content of the mapped memory can be used after unmapping. The GStreamer documentation states: “Getting access to the data of the memory is performed with gst_memory_map. The call will return a pointer to offset bytes into the region of memory. After the memory access is completed, gst_memory_unmap should be called.” But it does not matter if nvinfer is used in the “cascaded” way you mention.

last_flow_ret is returned by gst_nvinfer_generate_output, which is not synchronized with the output thread. The gst_nvinfer_generate_output function is called right after gst_nvinfer_submit_input_buffer returns; see basetransform.c.
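The conventional fix for shared state like this is to guard every read and write with the same mutex so the reader always sees a consistent value. A minimal sketch with an illustrative FlowState type (the names are hypothetical, not the actual nvinfer code):

```c
#include <pthread.h>

/* Illustrative stand-in for a flow-return value shared between an
 * output loop (writer) and a generate-output path (reader). */
typedef struct {
  pthread_mutex_t lock;
  int             last_flow_ret;   /* e.g. GST_FLOW_OK / GST_FLOW_ERROR */
} FlowState;

static void flow_state_init (FlowState *s) {
  pthread_mutex_init (&s->lock, NULL);
  s->last_flow_ret = 0;   /* GST_FLOW_OK is 0 in GStreamer */
}

/* Writer side: the output loop records the result of its pad push. */
static void flow_state_set (FlowState *s, int ret) {
  pthread_mutex_lock (&s->lock);
  s->last_flow_ret = ret;
  pthread_mutex_unlock (&s->lock);
}

/* Reader side: take a consistent snapshot of the last flow return. */
static int flow_state_get (FlowState *s) {
  pthread_mutex_lock (&s->lock);
  int ret = s->last_flow_ret;
  pthread_mutex_unlock (&s->lock);
  return ret;
}
```

An atomic integer would also suffice for a single plain value like this; the mutex version generalizes if more fields ever need to be read together.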

Thanks for sharing. last_flow_ret appears in three functions: gst_nvinfer_submit_input_buffer, gst_nvinfer_generate_output, and gst_nvinfer_output_loop.
If classifier_async_mode is true, last_flow_ret only takes effect in gst_nvinfer_submit_input_buffer and gst_nvinfer_generate_output, because batch->push_buffer in gst_nvinfer_output_loop is false.
If classifier_async_mode is false, last_flow_ret takes effect in gst_nvinfer_generate_output and gst_nvinfer_output_loop: it is set in gst_nvinfer_output_loop and read in gst_nvinfer_generate_output.

We recommend using the deepstream_parallel_inference_app mentioned above to do inference in parallel; metamux will mux the metadata from the multiple branches. Please refer to its README.md for more explanations.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.