I have Process 1 reading the camera using the following pipeline:
gst-launch-1.0 nvv4l2camerasrc device=/dev/video0 name=src1 ! 'video/x-raw(memory:NVMM),width=3840,height=2160,format=UYVY,framerate=30/1' ! nvvidconv interpolation-method=4 ! 'video/x-raw(memory:NVMM),width=1920,height=1080,format=UYVY,framerate=30/1' ! nvvidconv interpolation-method=4 ! 'video/x-raw,format=BGRx,width=1920,height=1080' ! videoconvert ! 'video/x-raw, format=(string)xRGB' ! identity drop-allocation=true ! v4l2sink device=/dev/video4 sync=false
Process 2 then receives this video stream using:
gst-launch-1.0 nvv4l2camerasrc device=/dev/video0 name=src1 ! 'video/x-raw(memory:NVMM),width=3840,height=2160,format=UYVY,framerate=30/1' ! nvvidconv interpolation-method=4 ! 'video/x-raw(memory:NVMM),width=1920,height=1080,format=UYVY,framerate=30/1' ! nvvidconv interpolation-method=4 ! 'video/x-raw,format=BGRx,width=1920,height=1080' ! videoconvert ! 'video/x-raw, format=(string)xRGB' ! identity drop-allocation=true ! v4l2sink device=/dev/video4 sync=false
I am using v4l2loopback, but it consumes a lot of CPU when video node 4 is accessed by multiple processes. My question is: is there a more efficient way to share video between processes with minimal resource consumption? For example, zero-copy methods, the Jetson Multimedia API, or leveraging the GPU or hardware engines? I am currently using a Jetson Xavier NX.
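For reference, one alternative I have looked at is GStreamer's shared-memory transport via shmsink/shmsrc instead of v4l2loopback. Note this is only a sketch under my assumptions (socket path, buffer sizing for 1920x1080 BGRx frames); it still copies frames through CPU memory rather than keeping them in NVMM, so I am unsure whether it actually reduces load:

```shell
# Producer: capture, scale on the ISP/VIC via nvvidconv, then publish
# frames over a shared-memory segment instead of a loopback device.
# shm-size must hold several frames: 1920*1080*4 bytes (BGRx) ~= 8.3 MB each.
gst-launch-1.0 nvv4l2camerasrc device=/dev/video0 \
  ! 'video/x-raw(memory:NVMM),width=3840,height=2160,format=UYVY,framerate=30/1' \
  ! nvvidconv interpolation-method=4 \
  ! 'video/x-raw,format=BGRx,width=1920,height=1080' \
  ! shmsink socket-path=/tmp/cam_shm shm-size=100000000 wait-for-connection=false sync=false

# Consumer (run one per reader process): attach to the segment,
# declaring the same caps the producer emits.
gst-launch-1.0 shmsrc socket-path=/tmp/cam_shm is-live=true \
  ! 'video/x-raw,format=BGRx,width=1920,height=1080,framerate=30/1' \
  ! videoconvert ! autovideosink sync=false
```

The socket path `/tmp/cam_shm` and the shm-size value are placeholders I chose; they would need tuning for the real frame rate and number of consumers.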