Questions about GStreamer RTSP for a monochrome camera

Hi

I’m trying to stream the video feed from a camera attached to my Jetson Nano, and I’m somewhat confused about how to use GStreamer properly.

(Note: I have to use v4l2src instead of nvarguscamerasrc, as my camera doesn’t support nvargus.)
The first thing I tried was simply previewing the video, which I managed with:

gst-launch-1.0 v4l2src device="/dev/video0" ! "video/x-raw, width=1280, height=400, format=GRAY8" ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420' ! nvoverlaysink sync=false

However, I’m quite confused about why I have to convert from GRAY8 to I420 before the feed will work with nvoverlaysink / autovideosink (why can’t I leave it as GRAY8?). I ask because I plan to stream the video over RTSP, and I would like to keep it in a grayscale format for better performance.

I would then like to publish this using RTSP, so I compiled test-launch as in the FAQ, and this pipeline works for me:

./test-launch "v4l2src ! video/x-raw, width=1280, height=400, framerate=20/1, format=GRAY8 ! nvvidconv ! video/x-raw(memory:NVMM), format=(string)I420 ! omxh264enc ! video/x-h264, stream-format=byte-stream, alignment=au ! h264parse ! rtph264pay name=pay0 pt=96"

Similar to my first question: how can I change this pipeline to send only 8-bit greyscale instead of three channels?

Thanks!

Here are the formats available on my camera:

(base) j@nano:~$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'GREY'
	Name        : 8-bit Greyscale
		Size: Discrete 2560x800
		Size: Discrete 2560x720
		Size: Discrete 1280x400
		Size: Discrete 640x200

	Index       : 1
	Type        : Video Capture
	Pixel Format: 'Y10 '
	Name        : 10-bit Greyscale
		Size: Discrete 2560x800
		Size: Discrete 2560x720
		Size: Discrete 1280x400
		Size: Discrete 640x200

	Index       : 2
	Type        : Video Capture
	Pixel Format: 'Y16 '
	Name        : 16-bit Greyscale
		Size: Discrete 2560x800
		Size: Discrete 2560x720
		Size: Discrete 1280x400
		Size: Discrete 640x200

Here is the output of gst-device-monitor-1.0, in case it’s helpful:

(base) j@nano:~$ gst-device-monitor-1.0 
Device found:

	name  : vi-output, arducam-csi2 6-000c
	class : Video/Source
	caps  : video/x-raw, format=(string)GRAY8, width=(int)2560, height=(int)800, framerate=(fraction)[ 0/1, 2147483647/1 ];
	        video/x-raw, format=(string)GRAY8, width=(int)2560, height=(int)720, framerate=(fraction)[ 0/1, 2147483647/1 ];
	        video/x-raw, format=(string)GRAY8, width=(int)1280, height=(int)400, framerate=(fraction)[ 0/1, 2147483647/1 ];
	        video/x-raw, format=(string)GRAY8, width=(int)640, height=(int)200, framerate=(fraction)[ 0/1, 2147483647/1 ];
	        video/x-raw, format=(string)GRAY16_LE, width=(int)2560, height=(int)800, framerate=(fraction)[ 0/1, 2147483647/1 ];
	        video/x-raw, format=(string)GRAY16_LE, width=(int)2560, height=(int)720, framerate=(fraction)[ 0/1, 2147483647/1 ];
	        video/x-raw, format=(string)GRAY16_LE, width=(int)1280, height=(int)400, framerate=(fraction)[ 0/1, 2147483647/1 ];
	        video/x-raw, format=(string)GRAY16_LE, width=(int)640, height=(int)200, framerate=(fraction)[ 0/1, 2147483647/1 ];
	properties:
		udev-probed = true
		device.bus_path = platform-54080000.vi
		sysfs.path = /sys/devices/50000000.host1x/54080000.vi/video4linux/video0
		device.subsystem = video4linux
		device.product.name = "vi-output\,\ arducam-csi2\ 6-000c"
		device.capabilities = :capture:
		device.api = v4l2
		device.path = /dev/video0
		v4l2.device.driver = tegra-video
		v4l2.device.card = "vi-output\,\ arducam-csi2\ 6-000c"
		v4l2.device.bus_info = platform:54080000.vi:0
		v4l2.device.version = 264649 (0x000409c9)
		v4l2.device.capabilities = 2216689665 (0x84200001)
		v4l2.device.device_caps = 69206017 (0x04200001)
	gst-launch-1.0 v4l2src ! ...

Hi,
The gray format is not supported by H264/H265 encoding. For streaming, you would need to convert to YUV420 (I420 or NV12).

The gray format is also not supported by the nvoverlaysink plugin.
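For a plain preview that keeps GRAY8 end-to-end, one option is to go through the CPU-based videoconvert element and a regular X sink instead of nvoverlaysink. A sketch only, assuming a desktop session is running and the 1280x400 GRAY8 mode from the question; untested on this particular camera:

```shell
# Preview GRAY8 directly: videoconvert produces whatever format the X sink
# accepts, so no NVMM/I420 caps are needed in the pipeline description.
gst-launch-1.0 v4l2src device=/dev/video0 \
  ! "video/x-raw, width=1280, height=400, format=GRAY8" \
  ! videoconvert ! xvimagesink sync=false
```

For H264/H265 encoding, though, the conversion to I420/NV12 is unavoidable.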


Using YUV before encoding may not be a big problem. Since there is no color information in the U and V planes, those channels should compress very efficiently.
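A rough way to see why, outside GStreamer entirely: converting GRAY8 to I420 appends quarter-resolution U and V planes, and gray content maps to constant chroma (value 128), which any compressor shrinks to almost nothing. A small Python sketch, with zlib standing in for the video encoder’s entropy coding (sizes are illustrative only, not H264 results):

```python
import os
import zlib

WIDTH, HEIGHT = 1280, 400

# Fake GRAY8 frame: incompressible random luma, as a worst case.
y_plane = os.urandom(WIDTH * HEIGHT)

# I420 adds two quarter-size chroma planes; gray content maps to constant 128.
uv_size = (WIDTH // 2) * (HEIGHT // 2)
i420_frame = y_plane + bytes([128]) * (2 * uv_size)

gray_compressed = len(zlib.compress(y_plane))
i420_compressed = len(zlib.compress(i420_frame))

print(f"raw GRAY8: {len(y_plane)} bytes, raw I420: {len(i420_frame)} bytes")
print(f"compressed GRAY8: {gray_compressed}, compressed I420: {i420_compressed}")
# The constant chroma planes add only a small fraction after compression,
# even though they are 50% of the raw frame size.
```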

Another way might be streaming FFV1 format that should preserve quality while reducing size.
It would be more complex: it wouldn’t be RTSP, so you would have to manage a container (mkv may be OK) and a UDP or TCP connection. If you try this, on the Jetson you may have to increase the kernel’s maximum socket write-buffer size, and you may also have to adjust the kernel’s maximum receive-buffer size on the receiver side.
FFV1 encoding may not be available with gstreamer, but ffmpeg should be able to do it.
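An ffmpeg sketch of that idea might look like the following. The receiver address and port are hypothetical, the flags assume ffmpeg’s v4l2 input accepts the camera’s GREY pixel format as "gray", and this is untested on the Jetson; start the receiver first so the sender has something to connect to:

```shell
# Receiver: listen on a TCP port and play the incoming Matroska stream.
ffplay -f matroska "tcp://0.0.0.0:5000?listen"

# Sender (Jetson): capture 8-bit gray from V4L2, encode losslessly with
# FFV1, mux into Matroska, and push over TCP to the receiver.
ffmpeg -f v4l2 -input_format gray -video_size 1280x400 -framerate 20 \
       -i /dev/video0 \
       -c:v ffv1 -f matroska tcp://RECEIVER_IP:5000
```

Unlike RTSP there is no session setup or seeking here; it is a single point-to-point push, which is the extra connection management mentioned above.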