Get two video streams at different resolutions from a single camera with NVIDIA GStreamer

Hi,

I have a CSI camera which can capture images at a high resolution (4032 x 3040). Currently, I am capturing the images through an OpenCV VideoCapture object and then resizing them to a lower resolution (640 x 640).

Because the resizing operation in OpenCV adds a slight delay, I want to use GStreamer to scale the input image and send the high-res and low-res streams to two different virtual video devices (/dev/video*) created with v4l2loopback.
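
For context, I created the two loopback devices with a modprobe call roughly like this (the parameter values are just an example; adjust video_nr to the node numbers you want):

$ sudo modprobe v4l2loopback devices=2 video_nr=1,2 exclusive_caps=1 card_label="HighRes,LowRes"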

I used the command below to do this:

gst-launch-1.0 nvarguscamerasrc gainrange='3 3' ispdigitalgainrange='3 3' exposuretimerange='1000000 1000000' ! 'video/x-raw(memory:NVMM), width=(int)4032, height=(int)3040, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv flip-method=0 ! tee name=mytee ! queue ! v4l2sink device=/dev/video1 mytee. ! videoconvert ! videoscale ! 'video/x-raw, width=(int)640, height=(int)640' ! v4l2sink device=/dev/video2

The problem is that when I view the output of the two devices, both of them are at the lower resolution of 640 x 640. This is the command I use to view the stream:

gst-launch-1.0 v4l2src device=/dev/video1 ! xvimagesink (and similarly for /dev/video2)
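
In case it's useful: the format each loopback node is actually carrying can also be checked with v4l2-ctl (from the v4l-utils package):

$ v4l2-ctl -d /dev/video1 --get-fmt-video
$ v4l2-ctl -d /dev/video2 --get-fmt-video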

I can't figure out why both virtual streams end up at the lower resolution.

Also, if there is any other way to get images at different resolutions, please feel free to suggest. Thank you!

Hi,
You can use nvvidconv to downscale to 640x480. A pipeline for showing a preview looks like this:

gst-launch-1.0 nvarguscamerasrc gainrange='3 3' ispdigitalgainrange='3 3' exposuretimerange='1000000 1000000' ! 'video/x-raw(memory:NVMM), width=(int)4032, height=(int)3040, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM),width=640,height=480' ! nvoverlaysink

Please refer to the above pipeline and adapt it to your use case.

Hi,

Thank you for your reply! With the pipeline you suggested, I only get images at the lower resolution. The reason I used v4l2loopback was to create two virtual video streams, one at high resolution (4032 x 3040) and the other at low resolution (640 x 640).

I modified my pipeline to use nvvidconv, as you suggested:

gst-launch-1.0 nvarguscamerasrc gainrange='3 3' ispdigitalgainrange='3 3' exposuretimerange='1000000 1000000' ! 'video/x-raw(memory:NVMM), width=(int)4032, height=(int)3040, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv flip-method=0 ! tee name=mytee ! queue ! v4l2sink device=/dev/video1 mytee. ! nvvidconv ! 'video/x-raw(memory:NVMM), width=640, height=640' ! nvvidconv ! v4l2sink device=/dev/video2

However, I get this error now:

ERROR: from element /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0: Internal data stream error. Additional debug info: gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0: streaming stopped, reason not-negotiated (-4)
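
In case the extra output is useful, I can re-run the same pipeline with verbose caps printing and a higher debug level to see where the negotiation stops (this only adds -v and GST_DEBUG to the command above):

$ GST_DEBUG=3 gst-launch-1.0 -v nvarguscamerasrc gainrange='3 3' ispdigitalgainrange='3 3' exposuretimerange='1000000 1000000' ! 'video/x-raw(memory:NVMM), width=(int)4032, height=(int)3040, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv flip-method=0 ! tee name=mytee ! queue ! v4l2sink device=/dev/video1 mytee. ! nvvidconv ! 'video/x-raw(memory:NVMM), width=640, height=640' ! nvvidconv ! v4l2sink device=/dev/video2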

Hi,
You may try:

$ gst-launch-1.0 nvarguscamerasrc gainrange='3 3' ispdigitalgainrange='3 3' exposuretimerange='1000000 1000000' ! 'video/x-raw(memory:NVMM), width=(int)4032, height=(int)3040, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv flip-method=0 ! video/x-raw,width=4032,height=3040 ! tee name=mytee ! queue ! v4l2sink device=/dev/video1 mytee. ! nvvidconv ! 'video/x-raw(memory:NVMM),width=640,height=480' ! nvvidconv ! video/x-raw ! v4l2sink device=/dev/video2

Hi,

Thanks for your reply! I get the same error again.

ERROR: from element /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0: Internal data stream error. Additional debug info: gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0: streaming stopped, reason not-negotiated (-4)

Some additional information, in case it helps:

Camera: ArduCam IMX477 B0250 (https://www.arducam.com/docs/camera-for-jetson-nano/native-jetson-cameras-imx219-imx477/imx477/)
JetPack Version: 4.4.1

You would need identity drop-allocation=true before v4l2sink when writing to v4l2loopback nodes.
Note that v4l2loopback nodes may result in significant CPU usage.
Try (here using BGRx):

gst-launch-1.0 nvarguscamerasrc gainrange='3 3' ispdigitalgainrange='3 3' exposuretimerange='1000000 1000000' ! 'video/x-raw(memory:NVMM), width=(int)4032, height=(int)3040, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv flip-method=0 ! 'video/x-raw(memory:NVMM)' ! tee name=mytee ! queue ! nvvidconv ! 'video/x-raw, format=BGRx' ! identity drop-allocation=1 ! v4l2sink device=/dev/video1 mytee. ! queue ! nvvidconv ! 'video/x-raw(memory:NVMM), width=640, height=480' ! nvvidconv ! 'video/x-raw, format=BGRx' ! identity drop-allocation=1 ! v4l2sink device=/dev/video2
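
For reading back and displaying the two streams, something like this should work (videoconvert added because the loopback nodes now carry BGRx, which xvimagesink may not accept directly):

gst-launch-1.0 v4l2src device=/dev/video1 ! videoconvert ! xvimagesink
gst-launch-1.0 v4l2src device=/dev/video2 ! videoconvert ! xvimagesink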

Thank you!