These four cameras have the following properties:
e-CAM130_CUXVR is a synchronized multiple 4K camera solution
for NVIDIA® Jetson AGX Xavier development kit that has up to four
13 MP 4-Lane MIPI CSI-2 camera boards.
When I use Python to capture 4K video with this code:
import cv2
import numpy as np

cap = cv2.VideoCapture('gst-launch-1.0 v4l2src ! xvimagesink')
while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow("", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
I get this error:
VIDEOIO ERROR: V4L: device gst-launch-1.0 v4l2src ! xvimagesink: Unable to query number of channels
Is there a recommended GStreamer pipeline to preview these cameras at 4K and 19 fps? (e-con Systems' documentation says that 19 fps is achievable at 4K resolution.)
Hi Alperylmcx,
Thanks for choosing an e-con Systems product.
Regarding your query, can you please try ximagesink as the videosink instead of xvimagesink in your pipeline, as given below:
gst-launch-1.0 v4l2src ! ximagesink
For more details on GStreamer pipelines, you can refer to the GStreamer usage guide document for sample pipelines.
As additional info, the e-CAM130_CUXVR supports 4K resolution at 30 fps.
I’ve managed to get a webcam preview with
cap=cv2.VideoCapture('gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=450 ! videoconvert ! appsink', cv2.CAP_GSTREAMER)
But it is not 30 FPS. Can you recommend any alterations to the pipeline above?
Hi Alperylmcx,
Glad you got it working.
Since GStreamer uses mmap as the IO method, we cannot achieve 30 fps with GStreamer.
However, the frame rate can be improved; please refer to this blog on improving the frame rate using OpenCV:
Accessing cameras in OpenCV with high performance
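The blog above describes pulling frames on a dedicated background thread so that the read in the display loop never blocks on the camera. A minimal sketch of that approach (my own illustration, not code from the blog); the wrapper accepts any source with a read() method, such as a cv2.VideoCapture:

```python
import threading

class ThreadedCapture:
    """Continuously read from `source` on a background thread and keep
    only the latest frame, so the caller's read() returns immediately."""

    def __init__(self, source):
        self.source = source
        self.lock = threading.Lock()
        # Grab one frame up front so read() is valid immediately.
        self.ok, self.frame = source.read()
        self.running = True
        self.thread = threading.Thread(target=self._update, daemon=True)
        self.thread.start()

    def _update(self):
        # Overwrite the stored frame as fast as the camera delivers them.
        while self.running:
            ok, frame = self.source.read()
            with self.lock:
                self.ok, self.frame = ok, frame

    def read(self):
        with self.lock:
            return self.ok, self.frame

    def stop(self):
        self.running = False
        self.thread.join()

if __name__ == "__main__":
    import cv2  # imported here so the class itself works without OpenCV
    cap = ThreadedCapture(cv2.VideoCapture(
        "v4l2src device=/dev/video0 ! videoconvert ! appsink",
        cv2.CAP_GSTREAMER))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("preview", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.stop()
    cv2.destroyAllWindows()
```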
Hi Velmurugan,
I just got my Jetson Xavier and my Quad Camera E-Con Systems kit.
I can view the 4 cameras with VLC pointed at /dev/videoX and run both e-con Systems demo programs.
However, it is impossible for me to use the VisionWorks examples or cv2.VideoCapture.
Are there any pointers on what to read in order to correct this?
Thanks a lot
This is now working with this code:
import cv2
import numpy as np

# WORKS (single camera):
# cap = cv2.VideoCapture("v4l2src device=\"/dev/video1\" ! videoconvert ! appsink")
cap0 = cv2.VideoCapture("v4l2src device=\"/dev/video0\" ! video/x-raw, width=(int)640, height=(int)480 ! videoconvert ! appsink")
cap1 = cv2.VideoCapture("v4l2src device=\"/dev/video1\" ! video/x-raw, width=(int)640, height=(int)480 ! videoconvert ! appsink")
cap2 = cv2.VideoCapture("v4l2src device=\"/dev/video2\" ! video/x-raw, width=(int)640, height=(int)480 ! videoconvert ! appsink")
cap3 = cv2.VideoCapture("v4l2src device=\"/dev/video3\" ! video/x-raw, width=(int)640, height=(int)480 ! videoconvert ! appsink")

while cap0.isOpened():
    ret0, frameyuv0 = cap0.read()
    ret1, frameyuv1 = cap1.read()
    ret2, frameyuv2 = cap2.read()
    ret3, frameyuv3 = cap3.read()
    if not (ret0 and ret1 and ret2 and ret3):  # stop if any camera fails
        break
    cv2.imshow("w0", frameyuv0)
    cv2.imshow("w1", frameyuv1)
    cv2.imshow("w2", frameyuv2)
    cv2.imshow("w3", frameyuv3)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()
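One possible refinement (my assumption, not something from e-con's guide): with four streams read in one loop, decoded frames can queue up inside appsink and add latency when the display loop is slow. appsink's standard max-buffers and drop properties keep only the newest frame:

```python
def cam_pipeline(index, width=640, height=480):
    # Same caps as the script above, plus appsink properties that keep
    # only the newest frame (max-buffers=1 drop=true), so a slow display
    # loop does not build up latency across the four streams.
    return (f"v4l2src device=/dev/video{index} ! "
            f"video/x-raw, width={width}, height={height} ! "
            "videoconvert ! appsink max-buffers=1 drop=true")

if __name__ == "__main__":
    import cv2  # requires the four cameras to be connected
    caps = [cv2.VideoCapture(cam_pipeline(i), cv2.CAP_GSTREAMER)
            for i in range(4)]
```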
However, I still can’t use any of the VisionWorks samples.