GStreamer camera to multiple sinks / pipelines

I am building a Jetson-based system that takes input from a camera (via nvarguscamerasrc) and feeds it to multiple outputs:

  1. stream via an RTSP server
  2. record to local video and/or stills
  3. feed into OpenCV for processing

These tasks can start and stop independently, so it seems I can't use a tee to splice them all into one pipeline. Right now I use intervideosink and intervideosrc to connect multiple pipelines, and it works, but it is very inefficient because intervideosrc/intervideosink don't work with NVMM memory.

Is there a way to accomplish this while keeping the data in NVMM memory as long as possible? Some thoughts:

  1. Is there an NVIDIA port of intervideosink/intervideosrc that works on NVMM memory?
  2. Can a single camera be accessed simultaneously from different pipelines using nvarguscamerasrc?
  3. Is there some other approach that isn't obvious?


If you would like to have multiple sinks in a single pipeline, please use the tee plugin. If you would like to have the camera source in one process and access it from other processes, you may try setting up UDP streaming. Please check
Gstreamer TCPserversink 2-3 seconds latency - #5 by DaneLLL
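As an illustration of the tee approach, a pipeline along these lines keeps buffers in NVMM memory until each branch actually needs CPU data. This is a sketch, not a tested command: the elements (nvarguscamerasrc, nvv4l2h264enc, nvvidconv) are standard Jetson plugins, but the resolution, file name, host, and port are placeholders, and the last branch would use appsink instead of fakesink when feeding OpenCV.

```shell
# Sketch: one camera capture tee'd to recording, UDP streaming, and a
# CPU branch. Only the last branch leaves NVMM memory (via nvvidconv).
gst-launch-1.0 -e nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1' ! \
  tee name=t \
  t. ! queue ! nvv4l2h264enc ! h264parse ! matroskamux ! \
       filesink location=recording.mkv \
  t. ! queue ! nvv4l2h264enc ! rtph264pay ! \
       udpsink host=127.0.0.1 port=5000 \
  t. ! queue ! nvvidconv ! 'video/x-raw,format=BGRx' ! videoconvert ! \
       fakesink
```

Note the queue after each tee pad: without it, one slow branch can stall the others.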

Since the frame buffer has to be a CPU buffer in BGR format for OpenCV, you would need to copy the NVMM buffer to a CPU buffer.
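That NVMM-to-CPU copy is typically done inside the capture pipeline itself, with nvvidconv handling the memory copy and videoconvert producing the BGR layout OpenCV expects. A minimal sketch of building such a pipeline string for OpenCV's GStreamer backend (the helper function name and default dimensions are my own, not from the thread):

```python
def jetson_capture_pipeline(width=1280, height=720, fps=30):
    """Build a pipeline string for cv2.VideoCapture's GStreamer backend.

    nvvidconv copies the NVMM buffer into CPU memory (as BGRx, since
    nvvidconv cannot output 3-channel BGR directly); videoconvert then
    converts to the BGR format OpenCV expects at the appsink.
    """
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM),width={width},height={height},"
        f"framerate={fps}/1 ! "
        "nvvidconv ! video/x-raw,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! "
        "appsink drop=true max-buffers=1"
    )

# On a Jetson, with OpenCV built with GStreamer support, you would
# open the camera like this:
#   import cv2
#   cap = cv2.VideoCapture(jetson_capture_pipeline(), cv2.CAP_GSTREAMER)
#   ok, frame = cap.read()
```

`drop=true max-buffers=1` on the appsink keeps OpenCV reading the latest frame instead of building up a backlog when processing is slower than the camera.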

I’ll try using the tee plugin and switching sinks in and out dynamically using the method here: Pipeline manipulation
