NVMM Memory Pipeline Encoders Broken When Coexisting with bufapi=true DeepStream Pipeline

• Hardware Platform - Jetson
• JetPack Version - 4.6.2 (L4T 32.7.2)
• DeepStream Version - 6.0
• VPI Version - 1.2
• Issue Type - bug

Hello NVIDIA Developers,
I am encountering an issue when running two pipelines with differing bufapi configurations:

  • One pipeline uses NVMM memory with bufapi=false. This pipeline includes encoders and decoders.
  • The other pipeline is a DeepStream pipeline using bufapi=true.

The issue arises when both pipelines are active: the encoders and decoders in the bufapi=false NVMM pipeline stop working correctly. To keep that pipeline functional I have had to fall back to OMX encoders for streaming, which is not ideal.

My questions are:

  1. Why does the presence of a DeepStream pipeline with bufapi=true affect the functionality of encoders and decoders in a separate bufapi=false pipeline?
  2. Are there any known compatibility issues or restrictions when using these configurations concurrently?
  3. Is there a recommended solution or workaround to ensure both pipelines can coexist without breaking functionality?

Any insights, suggestions, or guidance would be greatly appreciated.

Thank you!

Can you upgrade to JetPack 5.1.3 (L4T 35.5) and DeepStream 6.3?

What does your pipeline that includes encoders and decoders look like?

No, I am limited to using JP 4.6.2 and DeepStream 6.0.
The encoder used in the application is nvv4l2h264enc, which I replaced with omxh264enc because I was not able to resolve the issue with the differing bufapi pipelines.
Now I want to decode images that will be used as input to the pipelines, and nvjpegdec with DeepStream=false is behaving the same way as nvv4l2h264enc.

Example pipeline would be
multifilesrc location=exampleImage.jpg caps="image/jpeg, width=(int)X, height=(int)Y, framerate=25/1" ! nvjpegdec DeepStream=false ! "video/x-raw(memory:NVMM), format=RGBA, width=(int)X, height=(int)Y" ! appsink
In the appsink we modify the buffer using VPI and push it (as an NvBufSurface) to the detector. An example pipeline for the detector would be:
appsrc ! video/x-raw(memory:NVMM), format=RGBA, width=(int)X, height=(int)Y, framerate=1/1 ! nvvideoconvert output-buffers=8 compute-hw=0 nvbuf-memory-type=4 ! "video/x-raw(memory:NVMM), format=NV12, width=(int)X, height=(int)Y" ! MUX.sink_0 nvstreammux name=MUX batch-size=1 width=X height=Y batched-push-timeout=4000 compute-hw=2 nvbuf-memory-type=4 ! nvinfer name=INFER config-file-path="pgie_config.txt" ! fakesink sync=false
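
For context, the appsink handoff looks roughly like the sketch below. This is a minimal illustration, assuming the JetPack 4.x nvbuf_utils API; on_new_sample is a placeholder callback name and the actual VPI processing is elided:

	#include <gst/gst.h>
	#include <gst/app/gstappsink.h>
	#include "nvbuf_utils.h" // JetPack 4.x: ExtractFdFromNvBuffer()

	// Hypothetical "new-sample" callback on the appsink: pulls the NVMM
	// buffer, extracts its dmabuf fd, and hands it to the VPI stage.
	static GstFlowReturn on_new_sample(GstAppSink *sink, gpointer user_data)
	{
		GstSample *sample = gst_app_sink_pull_sample(sink);
		if (!sample)
			return GST_FLOW_ERROR;

		GstBuffer *buf = gst_sample_get_buffer(sample);
		GstMapInfo map;
		if (gst_buffer_map(buf, &map, GST_MAP_READ)) {
			int dmabuf_fd = -1;
			// With bufapi=false the mapped NVMM data wraps a legacy
			// NvBuffer, whose dmabuf fd this call extracts.
			if (ExtractFdFromNvBuffer(map.data, &dmabuf_fd) == 0) {
				// ... wrap dmabuf_fd in a VPIImage, run VPI, then
				// copy the result into an NvBufSurface for the appsrc ...
			}
			gst_buffer_unmap(buf, &map);
		}
		gst_sample_unref(sample);
		return GST_FLOW_OK;
	}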

What kind of Surface? Why do you use nvvideoconvert while using “bufapi=false” with the decoder/encoder?

The first pipeline (multifilesrc) has bufapi disabled (DeepStream=false), and there is no nvvideoconvert in it.

The second pipeline uses bufapi=true; that is why I am using nvvideoconvert. The input buffer is an NvBufSurface (NVBUF_MEM_SURFACE_ARRAY), which was created with gst_nvds_buffer_pool_new().

As you said:

Please tell us what kind of surface you use.

In the following pipeline, you use nvvideoconvert, so it is a DeepStream pipeline.

But with the sender (appsink) pipeline, you disable the DeepStream flag on nvjpegdec, so it is a Jetson GStreamer pipeline. The two types of pipelines can't work together.

DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

The input buffer for the DeepStream pipeline (starting with appsrc) is a GstBuffer holding an NvBufSurface, which is generated by the following pool (the elided gst_buffer_pool_config_set_params() arguments are the usual caps/size/min/max):

	GstStructure *config = gst_buffer_pool_get_config(pool);
	gst_buffer_pool_config_set_params(.......);
	// NvBufSurface-specific options understood by the nvds buffer pool.
	gst_structure_set(config,
		"memtype", G_TYPE_INT, NVBUF_MEM_SURFACE_ARRAY,
		"gpu-id", G_TYPE_UINT, 0,
		"batch-size", G_TYPE_UINT, 1,
		nullptr
	);
	// Apply the modified config back to the pool before activating it.
	gst_buffer_pool_set_config(pool, config);
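
For completeness, this is roughly how a buffer from that pool reaches the appsrc; a minimal sketch using standard GStreamer/appsrc API, where appsrc_element is a placeholder for the actual appsrc in the detector pipeline:

	#include <gst/app/gstappsrc.h>

	// Activate the pool, acquire an NvBufSurface-backed buffer, fill it,
	// and push it into the appsrc (which takes ownership of the buffer).
	gst_buffer_pool_set_active(pool, TRUE);

	GstBuffer *out_buf = nullptr;
	if (gst_buffer_pool_acquire_buffer(pool, &out_buf, nullptr) == GST_FLOW_OK) {
		// ... write the processed frame into the buffer's NvBufSurface ...
		gst_app_src_push_buffer(GST_APP_SRC(appsrc_element), out_buf);
	}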

The pipelines are functioning correctly because I modify the buffer from non-surface to surface. Below is the source pipeline (the pipeline pushing buffers to the DeepStream pipeline), which works as expected:

nvarguscamerasrc name=CameraSrc sensor-id=0 bufapi-version=FALSE  ! tee name=Producer 
	Producer. ! queue name=GetRgba leaky=1 ! nvvidconv ! video/x-raw(memory:NVMM), format=RGBA, width=(int)2448, height=(int)2048, framerate=25/1 ! appsink name=GetRgbaSink sync=false async=true 
	Producer. ! queue max-size-buffers=12 name=STREAMING leaky=1 ! nvvidconv !  video/x-raw(memory:NVMM), format=NV12, width=2448, height=2048, framerate=25/1 ! omxh264enc name=EncoderTcpStreaming bitrate=1000000 ! mpegtsmux ! udpsink host=0.0.0.0 port=5005 sync=false async=true
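
For reference, the non-surface to surface modification mentioned above is essentially a hardware-accelerated copy from the legacy NvBuffer into the dmabuf fd backing the NvBufSurface. A minimal sketch, assuming the JetPack 4.x nvbuf_utils API; src_fd would come from ExtractFdFromNvBuffer() on the appsink buffer and dst_surf from the nvds pool above:

	#include "nvbuf_utils.h"   // JetPack 4.x: NvBufferTransform()
	#include "nvbufsurface.h"  // DeepStream: NvBufSurface

	// Copy/convert a legacy NVMM buffer (src_fd) into an NvBufSurface
	// acquired from the DeepStream pool. For NVBUF_MEM_SURFACE_ARRAY the
	// per-surface dmabuf fd is exposed via bufferDesc.
	int copy_to_surface(int src_fd, NvBufSurface *dst_surf)
	{
		NvBufferTransformParams params = {0};
		params.transform_flag = NVBUFFER_TRANSFORM_FILTER;
		params.transform_filter = NvBufferTransform_Filter_Smart;

		int dst_fd = (int)dst_surf->surfaceList[0].bufferDesc;
		return NvBufferTransform(src_fd, dst_fd, &params);
	}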

However, when I replace the encoder with nvv4l2h265enc, the pipeline crashes, producing the following logs and ultimately a segmentation fault:

0:00:15.841251996 22268   0x5586c65320 INFO               GST_EVENT gstevent.c:814:gst_event_new_caps: creating caps event video/x-h265, stream-format=(string)byte-stream, alignment=(string)au, profile=(string)NULL, width=(int)2448, height=(int)2048, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)25/1, interlace-mode=(string)progressive, colorimetry=(string)bt709, chroma-site=(string)mpeg2
0:00:15.841367681 22268   0x5586c65320 INFO                    v4l2 gstv4l2object.c:3165:gst_v4l2_object_setup_pool:<EncoderTcpStreaming:src> accessing buffers via mode 2
0:00:15.841519304 22268   0x5586c65320 INFO          v4l2bufferpool gstv4l2bufferpool.c:825:gst_v4l2_buffer_pool_set_config:<EncoderTcpStreaming:pool:src> increasing minimum buffers to 2
0:00:15.841540265 22268   0x5586c65320 INFO          v4l2bufferpool gstv4l2bufferpool.c:838:gst_v4l2_buffer_pool_set_config:<EncoderTcpStreaming:pool:src> reducing maximum buffers to 64
0:00:15.841555210 22268   0x5586c65320 INFO          v4l2bufferpool gstv4l2bufferpool.c:849:gst_v4l2_buffer_pool_set_config:<EncoderTcpStreaming:pool:src> can't allocate, setting maximum to minimum
0:00:15.841606348 22268   0x5586c65320 INFO          v4l2bufferpool gstv4l2bufferpool.c:838:gst_v4l2_buffer_pool_set_config:<EncoderTcpStreaming:pool:src> reducing maximum buffers to 64
0:00:15.841630285 22268   0x5586c65320 INFO          v4l2bufferpool gstv4l2bufferpool.c:849:gst_v4l2_buffer_pool_set_config:<EncoderTcpStreaming:pool:src> can't allocate, setting maximum to minimum
NvMMLiteOpen : Block : BlockType = 8 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 8 
0:00:15.843520451 22268   0x5586c65320 INFO                    v4l2 gstv4l2object.c:4039:gst_v4l2_object_set_format_full:<EncoderTcpStreaming:sink> Set output framerate to 25/1
0:00:15.843552420 22268   0x5586c65320 INFO                    v4l2 gstv4l2object.c:3165:gst_v4l2_object_setup_pool:<EncoderTcpStreaming:sink> accessing buffers via mode 5
0:00:15.843682250 22268   0x5586c65320 INFO          v4l2bufferpool gstv4l2bufferpool.c:825:gst_v4l2_buffer_pool_set_config:<EncoderTcpStreaming:pool:sink> increasing minimum buffers to 2
0:00:15.843728076 22268   0x5586c65320 INFO          v4l2bufferpool gstv4l2bufferpool.c:832:gst_v4l2_buffer_pool_set_config:<EncoderTcpStreaming:pool:sink> increasing minimum buffers to 4
0:00:15.843763886 22268   0x5586c65320 INFO          v4l2bufferpool gstv4l2bufferpool.c:838:gst_v4l2_buffer_pool_set_config:<EncoderTcpStreaming:pool:sink> reducing maximum buffers to 64
0:00:15.843782575 22268   0x5586c65320 INFO          v4l2bufferpool gstv4l2bufferpool.c:849:gst_v4l2_buffer_pool_set_config:<EncoderTcpStreaming:pool:sink> can't allocate, setting maximum to minimum
0:00:15.843797455 22268   0x5586c65320 INFO          v4l2bufferpool gstv4l2bufferpool.c:854:gst_v4l2_buffer_pool_set_config:<EncoderTcpStreaming:pool:sink> adding needed video meta
0:00:15.844372489 22268   0x5586c65320 WARN          v4l2bufferpool gstv4l2bufferpool.c:1087:gst_v4l2_buffer_pool_start:<EncoderTcpStreaming:pool:src> Uncertain or not enough buffers, enabling copy threshold
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 2448 x 2058 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 0.000001, max 48.000000; Exposure Range min 30000, max 660000000;

GST_ARGUS: Running with following settings:
   Camera index = 1 
   Camera mode  = 0 
   Output Stream W = 2448 H = 2058 
   seconds to Run    = 0 
   Frame Rate = 29.999999 
GST_ARGUS: Sensor Timestamps Enabled
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
0:00:16.839911416 22268   0x5573855200 INFO        GST_ELEMENT_PADS gstelement.c:920:gst_element_get_static_pad: found pad CameraSrc:src
0:00:16.842933569 22268   0x55858ab2d0 INFO               GST_EVENT gstevent.c:895:gst_event_new_segment: creating segment event time segment start=0:00:00.000000000, offset=0:00:00.000000000, stop=99:99:99.999999999, rate=1.000000, applied_rate=1.000000, flags=0x00, time=0:00:00.000000000, base=0:00:00.000000000, position 0:00:00.000000000, duration 99:99:99.999999999
GST_ARGUS: NvArgusCameraSrc: Setting Exposure Time Range : 5000000 5000000
GST_ARGUS: NvArgusCameraSrc: Setting Gain Range : 5 5
GST_ARGUS: NvArgusCameraSrc: Setting ISP Digital Gain Range : 1 1
0:00:16.846168500 22268   0x55858ab2d0 INFO                 basesrc gstbasesrc.c:2945:gst_base_src_loop:<CameraSrc> marking pending DISCONT
0:00:16.846746574 22268   0x5586c65230 INFO                    task gsttask.c:457:gst_task_set_lock: setting stream lock 0x558589fb50 on task 0x7e5c008cb0
0:00:16.846810545 22268   0x5586c65230 INFO                GST_PADS gstpad.c:6154:gst_pad_start_task:<EncoderTcpStreaming:src> created task 0x7e5c008cb0
Segmentation fault

I’ve tried with JetPack 6.1 GA and DeepStream 7.1 GA on an Orin board. The nvarguscamerasrc+appsink pipeline freezes after several seconds. We will investigate the issue. For your case on DeepStream 6.0.1, you may use the pipeline that works with omxh264enc.

So, you are experiencing it too.
I thought there would be some define to set the memory type for the encoders so that it does not conflict with DeepStream.