Unable to use the correct GStreamer pipeline for e-CAM130_CUXVR with Jetson AGX Xavier in OpenCV Python

Hello,

I have recently integrated the e-CAM130_CUXVR with the Jetson AGX Xavier, and I am trying to open the camera and start video capture using OpenCV. Below are some of the pipelines I have tried:

1. cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=2592, height=1944, "
                            "format=NV12, framerate=30/1 ! nvvidconv ! video/x-raw,format=(string)BGRx ! "
                            "videoconvert! appsink", cv2.CAP_GSTREAMER)

2. cap = cv2.VideoCapture("v4l2src device=/dev/video1 !"
                       "video/x-raw, format=(string)UYVY, width=(int)3840, height=(int)2160 ! nvvidconv !"
                       "video/x-raw(memory:NVMM), format=(string)I420, width=(int)1920, height=(int)1080 !"
                       "videoconvert! appsink  overlay-w=1920 overlay-h=1080 sync=false",
                        cv2.CAP_GSTREAMER)

3. cap = cv2.VideoCapture('nvarguscamerasrc sensor-id=0 ! '
     'format=(string)NV12, framerate=(fraction)120/1 ! '
     'nvvidconv flip-method=0 ! '
     'video/x-raw, width=(int)1920, height=(int)1080, format=(string)BGRx ! '
     'videoconvert ! '
     'video/x-raw, format=(string)BGR ! appsink', cv2.CAP_GSTREAMER)

4. cap = cv2.VideoCapture("gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720,format=(string)I420, framerate=(fraction)24/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink")

Basically, I have tried mixing and matching different arguments in the GStreamer pipeline according to the documentation and other online suggestions, but nothing seems to work.

I have also checked the pipeline from the documentation (the 4th one in the list, but run directly in the terminal as a gst-launch-1.0 command, not through OpenCV). The gst-launch-1.0 command by itself works well: the camera opens and I can see the live feed. But when I use OpenCV, most of the time I get the error "GStreamer: unable to start pipeline in function cvCaptureFromCAM_GStreamer". I am using a dual camera setup, and both video0 and video1 work for me from the terminal.

I have compiled OpenCV 3.4.0 from source.
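
For reference, I double-checked that my build has GStreamer support:

import cv2
print(cv2.getBuildInformation())  # the Video I/O section should list "GStreamer: YES"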

Can someone please help me and let me know what could possibly be wrong in the pipeline and how I can start capturing the video?

Thank you in advance!

Cheers,
T

What is the working pipeline if you use gst-launch-1.0?

gst-launch-1.0 v4l2src device=/dev/video ! "video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080" ! nvvidconv ! "video/x-raw(memory:NVMM), format=(string)I420, width=(int)1920, height=(int)1080" ! nvoverlaysink overlay-w=1920 overlay-h=1080 sync=false

The above works with gst-launch-1.0. To use the same pipeline with OpenCV, I replace nvoverlaysink with appsink and also remove the overlay-w=1920 overlay-h=1080 sync=false arguments.
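
In Python the substituted pipeline looks like this (this is exactly what I pass to cv2.VideoCapture):

cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! "
                       "video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080 ! nvvidconv ! "
                       "video/x-raw(memory:NVMM), format=(string)I420, width=(int)1920, height=(int)1080 ! "
                       "appsink", cv2.CAP_GSTREAMER)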

Hi Tejaswini,
The sample code you provided for passing the GStreamer command via OpenCV looks improper.
2nd in the list:
overlay-w and overlay-h are not properties of appsink, so this will fail.

4th in the list:

v4l2src device=/dev/video0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720,format=(string)I420, framerate=(fraction)24/1 !

The above pipeline also cannot be used with the e-CAM130_CUXVR, because v4l2src cannot push into memory:NVMM directly.

The 1st and 3rd are also not possible, because these camera modules do not support the nvarguscamerasrc GStreamer plugin element.

Can you please try the pipelines below once and let me know:

v4l2src device=/dev/video0 ! "video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080" ! nvvidconv ! "video/x-raw(memory:NVMM), format=(string)I420, width=(int)1920, height=(int)1080" ! appsink

OR

v4l2src device=/dev/video0 ! "video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080" ! videoconvert ! appsink
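
From Python, the pipeline string goes directly into cv2.VideoCapture with the GStreamer backend; for example, a sketch using the second pipeline (adjust the device node to your setup):

cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! "
                       "video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080 ! "
                       "videoconvert ! appsink", cv2.CAP_GSTREAMER)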

Thanks
Ritesh Kumar
e-con Systems India Pvt.Ltd

Hello Ritesh,

Thank you for getting back.

The first one that you mentioned doesn’t work:

v4l2src device=/dev/video0 ! "video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080" ! nvvidconv ! "video/x-raw(memory:NVMM), format=(string)I420, width=(int)1920, height=(int)1080" ! appsink

The second one works:
v4l2src device=/dev/video0 ! "video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080" ! videoconvert ! appsink

This is one more version that I tried before getting your reply, and it works too:
pipeline = 'v4l2src device=/dev/video0 ! video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080 ! nvvidconv ! video/x-raw(memory:NVMM), format=(string)I420, width=(int)1920, height=(int)1080 ! nvvidconv ! videoconvert ! tee ! appsink'
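
For completeness, this is the minimal read/display loop I wrap around that pipeline string (nothing in it is camera-specific):

import cv2

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("GStreamer pipeline failed to open")
while True:
    ret, frame = cap.read()  # frame is a numpy array in the negotiated appsink format
    if not ret:
        break
    cv2.imshow("e-CAM130_CUXVR", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()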

Is there any documentation explaining the arguments that go into the pipeline? It would be good to know what they mean before using them.

Thank you!

You may read this topic, although it is old and about the TX2 onboard camera with a Bayer sensor, for basic knowledge.

In short, you may access your camera in 2 ways:
1- V4L2 API:

  • If it is a UVC USB camera, it should provide the V4L2 API through a /dev/videoX node (where X=0,1,…).
  • If it is a CSI camera, and you have a driver with a device tree supporting it, it will also provide a V4L2 node in /dev/videoX (usually these get the first video device nodes, such as video0 for a single CSI cam).

In both cases you should be able to use the v4l2src plugin. It will deliver frames into CPU memory (video/x-raw) in the format you specify as output caps, provided that format is listed by

v4l2-ctl -d /dev/videoX --list-formats-ext

In your case it seems you are able to receive UYVY frames into CPU memory.

2- NV proprietary path:
For a CSI camera, there is also a bypass that uses the dedicated HW ISP for further conversion… this usually implies you have a SW setup from a camera/board vendor (or one of the cameras supported by default: OV5693 for TX1/TX2/Xavier, or IMX219 for Nano).
With this you would use nvarguscamerasrc in recent releases (it was nvcamerasrc in earlier ones), which outputs into NVMM memory (contiguous addresses suitable for DMA, GPU, ISP, NVDEC, NVENC…), referred to as video/x-raw(memory:NVMM) in GStreamer caps.
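
For a camera that does support this path (not your e-con module, according to the vendor reply above), the usual OpenCV pipeline is a sketch like:

cap = cv2.VideoCapture('nvarguscamerasrc sensor-id=0 ! '
                       'video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1 ! '
                       'nvvidconv ! video/x-raw, format=BGRx ! '
                       'videoconvert ! video/x-raw, format=BGR ! appsink', cv2.CAP_GSTREAMER)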

OpenCV mainly works on RGB color frames, so in early versions it expected RGB/BGR for color (or GRAY8 for monochrome), from CPU memory.
So the standard caps before appsink for color frames would be video/x-raw, format=BGR.
(Note that recent OpenCV versions can handle YUV formats in video capture, if you can process your frames from the luminance or chrominance planes.)

So the main point is how to convert the CPU UYVY format (from v4l2src), or whatever you can get from your CSI camera/vendor driver in NVMM memory if any, into BGR format in CPU memory.

Your first working pipeline uses the videoconvert plugin for this. It may work for low resolutions/framerates, as it is CPU only. (You may try adding queue elements before and after it so that it can run on a different CPU core; see the sketch below.)
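
As a sketch, with queues around the CPU conversion (queue properties left at defaults):

pipeline = ('v4l2src device=/dev/video0 ! '
            'video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080 ! '
            'queue ! videoconvert ! queue ! '
            'video/x-raw, format=BGR ! appsink')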

Another way, for higher framerates/resolutions, would be to use the ISP via nvvidconv. I am not sure about recent releases, but it used to require at least one of its input or output to be in NVMM memory.
So your last working pipeline first uses nvvidconv to convert into I420 and output into NVMM memory, while the second nvvidconv just copies back to CPU memory; the final I420-to-BGR conversion is done on the CPU with videoconvert.

You may try converting with nvvidconv into BGRx format, so that videoconvert only has to remove the extra byte to provide BGR to OpenCV. It may give better performance.
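
An untested sketch of that idea for your camera, keeping the structure of your working pipeline but with BGRx instead of I420 on the way back to CPU memory:

pipeline = ('v4l2src device=/dev/video0 ! '
            'video/x-raw, format=(string)UYVY, width=(int)1920, height=(int)1080 ! '
            'nvvidconv ! video/x-raw(memory:NVMM), format=(string)I420 ! '  # into NVMM memory (hardware path)
            'nvvidconv ! video/x-raw, format=(string)BGRx ! '               # convert and copy back to CPU as BGRx
            'videoconvert ! video/x-raw, format=(string)BGR ! appsink')     # CPU only drops the padding byte
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)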