GStreamer - Multiple Camera Output

Hello.

I’m trying to do both of the following:

  • save the camera video stream to an H.264 file,
  • and retrieve the frames in OpenCV for Python,

through a single GStreamer pipeline.

This is my code:

import cv2
from time import strftime, gmtime

class Camera():

	def __init__(self, synchronizer = None):

		filename = "/home/nvidia/Videos/" + strftime("%Y-%m-%d_%H-%M-%S", gmtime()) + ".h264"

		gstreamer  = 'nvcamerasrc '
		gstreamer += "! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)24/1 "
		gstreamer += "! nvvidconv flip-method=2 "
		gstreamer += "! video/x-raw, format=(string)I420 "
		gstreamer += "! tee name=streams "

		gstreamer += " streams. "
		gstreamer += "! omxh264enc"
		gstreamer += "! 'video/x-h264, stream-format=(string)byte-stream'"
		gstreamer += "! filesink location=" + filename + " -e"

		gstreamer += "streams. "
		gstreamer += "! videoconvert "
		gstreamer += "! video/x-raw, format=(string)BGR "
		gstreamer += "! appsink "

		self.camera = cv2.VideoCapture(gstreamer)

And this is the result:

(python3:4811): GStreamer-CRITICAL **: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed

(python3:4811): GStreamer-WARNING **: Trying to link elements streams and omxh264enc-omxh264enc0 that don't share a common ancestor: omxh264enc-omxh264enc0 hasn't been added to a bin or pipeline, and streams is in pipeline0

Available Sensor modes :
2592 x 1944 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
2592 x 1458 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
1280 x 720 FR=120.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10

NvCameraSrc: Trying To Set Default Camera Resolution. Selected 640x480 FrameRate = 24.000000 ...

GStreamer Plugin: Embedded video playback halted; module nvcamerasrc0 reported: Internal data flow error.
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/Documents/OpenCV/opencv/modules/videoio/src/cap_gstreamer.cpp, line 818
Traceback (most recent call last):
  File "Maestro.py", line 186, in <module>
    main.run()
  File "Maestro.py", line 32, in run
    self.initialize_sensors()
  File "Maestro.py", line 70, in initialize_sensors
    self.camera = Camera(synchronizer = self.synchronizer)
  File "/home/nvidia/Documents/Maestro/python/Sources/Camera.py", line 35, in __init__
    self.camera = cv2.VideoCapture(gstreamer)
cv2.error: /home/nvidia/Documents/OpenCV/opencv/modules/videoio/src/cap_gstreamer.cpp:818: error: (-2) GStreamer: unable to start pipeline
 in function cvCaptureFromCAM_GStreamer

Any idea on how to do this?

You may try to turn each

streams.

into

streams. ! queue

so that each branch reading from the tee starts with its own queue (this decouples the two branches).
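Applied to the pipeline string above, that suggestion would look roughly like this (an untested sketch; out.h264 stands in for the timestamped filename):

```python
# Sketch: each branch read from the tee starts with its own queue,
# which decouples the encoder branch from the appsink branch.
gstreamer  = "nvcamerasrc "
gstreamer += "! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)24/1 "
gstreamer += "! nvvidconv flip-method=2 "
gstreamer += "! video/x-raw, format=(string)I420 "
gstreamer += "! tee name=streams "
gstreamer += "streams. ! queue "
gstreamer += "! omxh264enc "
gstreamer += "! video/x-h264, stream-format=(string)byte-stream "
gstreamer += "! filesink location=out.h264 "
gstreamer += "streams. ! queue "
gstreamer += "! videoconvert "
gstreamer += "! video/x-raw, format=(string)BGR "
gstreamer += "! appsink"
```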

You can also add

--gst-debug=*:3

at the beginning of your pipeline to get more details (you can change the level for more or less verbosity, and specify a single plugin instead of *).
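Note that --gst-debug is a command-line flag (as used with gst-launch); when the pipeline is launched from Python through cv2.VideoCapture, an equivalent is to set the GST_DEBUG environment variable before OpenCV initializes GStreamer. A minimal sketch:

```python
import os

# GStreamer reads GST_DEBUG from the environment; "*:3" enables
# warning-level messages for all plugins (raise the number for more detail,
# or replace * with a plugin name to restrict the output).
os.environ["GST_DEBUG"] = "*:3"

# Import OpenCV and open the capture only after setting the variable, e.g.:
# import cv2
# camera = cv2.VideoCapture(pipeline_string)
```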

Thanks for your help @Honey_Patouceul. I tested different things and solved the error with this:

gstreamer  = 'nvcamerasrc fpsRange="30.0 30.0" '
gstreamer += "! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1 "
gstreamer += "! nvvidconv flip-method=2 "
gstreamer += "! video/x-raw(memory:NVMM), format=(string)I420 "
gstreamer += "! tee name=streams "

gstreamer += "! queue "
gstreamer += "! omxh264enc "
gstreamer += "! video/x-h264, stream-format=(string)byte-stream "
gstreamer += "! filesink location=" + filename + " -e "

gstreamer += "streams. "
gstreamer += "! videoconvert "
gstreamer += "! video/x-raw, format=(string)BGR "
gstreamer += "! appsink "

self.camera = cv2.VideoCapture(gstreamer)

But now, when I open the camera, the cv2.VideoCapture constructor gets stuck on this:

Inside NvxLiteH264DecoderLowLatencyInitNvxLiteH264DecoderLowLatencyInit set DPB and MjstreamingInside NvxLiteH265DecoderLowLatencyInitNvxLiteH265DecoderLowLatencyInit set DPB and Mjstreaming
Available Sensor modes :
2592 x 1944 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
2592 x 1458 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
1280 x 720 FR=120.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10

NvCameraSrc: Trying To Set Default Camera Resolution. Selected 1920x1080 FrameRate = 30.000000 ...

Framerate set to : 30 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 4
===== MSENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
===== MSENC blits (mode: 1) into tiled surfaces =====

Any idea?

Yes, you should not add ‘streams.’ right after the tee: the first branch continues directly from the tee’s output. Sorry for missing that in my previous reply.

You may further try to add caps after queue:

queue ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1

You may also add the same caps after nvvidconv, and add:

video/x-raw, format=(string)BGR, width=(int)1920, height=(int)1080, framerate=(fraction)30/1

after videoconvert.

This pipeline works fine in C++ with opencv-3.2.0 on my TX2:

const char* gst = "nvcamerasrc ! video/x-raw(memory:NVMM), format=(string)I420, width=(int)640, height=(int)480, framerate=(fraction)24/1 "
                  "! nvvidconv flip-method=6 ! video/x-raw, format=(string)I420, width=(int)640, height=(int)480, framerate=(fraction)24/1 "
                  "! tee name=t ! video/x-raw, format=(string)I420, width=(int)640, height=(int)480, framerate=(fraction)24/1 "
                  "! queue ! video/x-raw, format=(string)I420, width=(int)640, height=(int)480, framerate=(fraction)24/1 "
                  "! omxh264enc ! video/x-h264, stream-format=(string)byte-stream "
                  "! filesink location=test.h264 "
                  "t. "
                  "! queue ! video/x-raw, format=(string)I420, width=(int)640, height=(int)480, framerate=(fraction)24/1 "
                  "! videoconvert ! video/x-raw, format=(string)BGR, width=(int)640, height=(int)480, framerate=(fraction)24/1 "
                  "! appsink";
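For reference, a Python sketch that builds the same pipeline string (untested translation of the C++ above; build_pipeline and its parameters are hypothetical names, and the nvcamerasrc/omxh264enc elements only exist on a Jetson, so the VideoCapture call is expected to work only there):

```python
def build_pipeline(width=640, height=480, fps=24, out="test.h264"):
    # Caps in NVMM (GPU) memory for the camera source.
    caps_nvmm = ("video/x-raw(memory:NVMM), format=(string)I420, "
                 f"width=(int){width}, height=(int){height}, framerate=(fraction){fps}/1")
    # Same caps in CPU memory, repeated after each element as in the C++ version.
    caps_raw = ("video/x-raw, format=(string)I420, "
                f"width=(int){width}, height=(int){height}, framerate=(fraction){fps}/1")
    return (
        f"nvcamerasrc ! {caps_nvmm} "
        f"! nvvidconv flip-method=6 ! {caps_raw} "
        f"! tee name=t ! {caps_raw} "
        f"! queue ! {caps_raw} "
        "! omxh264enc ! video/x-h264, stream-format=(string)byte-stream "
        f"! filesink location={out} "
        "t. "
        f"! queue ! {caps_raw} "
        "! videoconvert ! video/x-raw, format=(string)BGR, "
        f"width=(int){width}, height=(int){height}, framerate=(fraction){fps}/1 "
        "! appsink"
    )

pipeline = build_pipeline()
# On the Jetson: camera = cv2.VideoCapture(pipeline)
```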

OK, that works! Thank you so much @Honey_Patouceul!

Nice to see it worked out.
In fact, setting the caps on the input of appsink alone would probably be enough, but it is harmless to set them everywhere and make sure each stage behaves the way you expect.