Frame drops with multiple omxh265enc instances using NVIDIA (NVMM) memory

Hi all,

I have been trying to run six H.265 encoding gstreamer pipelines simultaneously, with these characteristics:

  • Source: videotestsrc element.
  • Resolution: 640x480
  • Framerate: 60fps
  • JetPack 3.1

  • Pipeline with encoding
    gst-launch-1.0 videotestsrc is-live=true ! 'video/x-raw,width=640,height=480,framerate=60/1,format=NV12' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=I420' ! omxh265enc control-rate=2 bitrate=16000000 preset-level=0 ! fakesink sync=false async=false &

However, once more than three pipelines are running, the frame-rate of each one drops dramatically, to about 40 fps. Each additional pipeline drops the frame-rate further, down to about 10 fps.

This issue only appears when the omxh265enc elements are in the pipelines; otherwise, the six pipelines without encoding keep the framerate stable at 60 fps.

  • Pipeline without encoding
    gst-launch-1.0 videotestsrc is-live=true ! 'video/x-raw,width=640,height=480,framerate=60/1,format=NV12' ! nvvidconv ! fakesink sync=false async=false &
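To quantify the per-pipeline framerate, fakesink can be swapped for fpsdisplaysink (from gst-plugins-bad); this is a sketch, assuming fpsdisplaysink is available on the JetPack 3.1 image:

```shell
# Launch six encoding pipelines; with -v, fpsdisplaysink reports the
# measured fps of each one on stdout. It wraps a fakesink here, so no
# display is needed and it accepts the encoded stream.
for i in $(seq 1 6); do
  gst-launch-1.0 -v videotestsrc is-live=true \
    ! 'video/x-raw,width=640,height=480,framerate=60/1,format=NV12' \
    ! nvvidconv ! 'video/x-raw(memory:NVMM),format=I420' \
    ! omxh265enc control-rate=2 bitrate=16000000 preset-level=0 \
    ! fpsdisplaysink video-sink=fakesink text-overlay=false sync=false &
done
wait
```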

On the other hand, I have also identified that the issue only happens when using the NVIDIA buffers provided by the nvvidconv element. If all the pipelines use userspace memory throughout the whole process, the frame-rate is stable in all of them.

  • Pipeline with encoding, not using NVIDIA buffers
    gst-launch-1.0 videotestsrc is-live=true ! 'video/x-raw,width=640,height=480,framerate=60/1' ! perf name=cam0 ! omxh265enc control-rate=2 bitrate=16000000 preset-level=0 ! fakesink sync=false async=false &
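One way to narrow down whether the drop comes from CPU load, memory bandwidth, or DVFS throttling is to lock the clocks and watch the board's utilization while the pipelines run. A sketch, assuming the stock JetPack 3.x locations of jetson_clocks.sh and tegrastats in the home directory:

```shell
# Max out CPU/GPU/EMC clocks first, to rule out dynamic frequency
# scaling as the cause of the drop.
sudo ~/jetson_clocks.sh

# Then monitor CPU and EMC (memory controller) activity while the six
# pipelines are running; high EMC with low CPU points at memory bandwidth.
sudo ~/tegrastats
```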

Has anyone seen this issue when using omxh265enc with NVIDIA buffers, or can anyone help me debug it?

Please refer to
https://devtalk.nvidia.com/default/topic/1012417/jetson-tx1/tx1-gstreamer-nvvidconv-will-not-pass-out-of-nvmm-memory/post/5162187/#5162187

In your pipeline, the bottleneck is the memcpy from video/x-raw to video/x-raw(memory:NVMM). What is the source in your use case? Bayer sensors?
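Since videotestsrc produces buffers in system memory, every frame is copied into NVMM by nvvidconv before reaching the encoder. A source that outputs NVMM buffers directly avoids that copy entirely; for example, with a CSI camera on JetPack 3.x (a sketch, assuming nvcamerasrc and a camera capable of this mode):

```shell
# nvcamerasrc outputs video/x-raw(memory:NVMM) directly, so no
# system-memory -> NVMM copy is needed before the encoder.
gst-launch-1.0 nvcamerasrc \
  ! 'video/x-raw(memory:NVMM),width=640,height=480,framerate=60/1,format=I420' \
  ! omxh265enc control-rate=2 bitrate=16000000 preset-level=0 \
  ! fakesink sync=false async=false
```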