We have a Jetson Xavier with the dual-camera kit from Leopard Imaging: LI-XAVIER-KIT-IMX274M12-D
Using drivers: IMX274_R31.1_Xavier_NV-Tri_2lane_20181130
JetPack version 4.1.1, OpenCV version 3.4.1
My main questions are at the bottom of the post. The following is a description of the issues I am seeing.
We just got these cameras from Leopard and, with their help, reached the point where both cameras work simultaneously, but ONLY when viewing them through command-line calls. For example, the following commands all work:
For camera 0:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160' ! nvvidconv ! xvimagesink -e
For camera 1:
gst-launch-1.0 nvarguscamerasrc sensor-id=1 ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160' ! nvvidconv ! xvimagesink -e
For both cameras at the same time:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160' ! nvvidconv ! xvimagesink -e & gst-launch-1.0 nvarguscamerasrc sensor-id=1 ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160' ! nvvidconv ! xvimagesink -e
All three of those commands work fine, producing either a single output or a dual output. However, in Python 3, when I try to access both cameras, the second capture fails with an error that seems to indicate the system cannot access both simultaneously. A single camera works fine in Python. My very simple Python code is below; see the commented line for switching between camera 0, camera 1, and both simultaneously.
__________________________ (BEGIN PYTHON CODE)
import cv2
import time
# CAMERA 0
cam0 = cv2.VideoCapture("nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)480, height=(int)270, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink", cv2.CAP_GSTREAMER)

# CAMERA 1
cam1 = cv2.VideoCapture("nvarguscamerasrc sensor-id=1 ! video/x-raw(memory:NVMM), width=(int)480, height=(int)270, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink", cv2.CAP_GSTREAMER)

# cam0 = cam1  # uncomment to test cam1 by itself (and comment out the cam0 line above)

for i in range(15, 215):
    a = time.time()
    ret, im = cam0.read()
    im[(i - 5):(i + 5), :30, :] = 255  # test: a white block that moves as frames are processed
    cv2.imshow('im', im)
    cv2.waitKey(1)
    b = time.time()
    T = b - a
    fps = 1 / T
    print('fps = %3.3f' % fps)
cv2.destroyAllWindows()
__________________________ (END PYTHON CODE)
When I try both cameras, the first camera initializes just fine, but the script fails on the second cv2.VideoCapture line ("cam1 = cv2.VideoCapture(...)"). This is the output I see in the terminal; it then just hangs until I hit Ctrl+C:
__________________ (BEGIN TERMINAL OUTPUT)
nvidia@jetson:~/src/Xavier_Leopard_Vision$ python3 dual_camera_stream.py
nvbuf_utils: Could not get EGL display connection
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected…
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3840 x 2160 FR = 29.999999 fps; Analog Gain range min 1.000000, max 22.000000; Exposure Range min 7000, max 332402000;
GST_ARGUS: 1920 x 1080 FR = 59.999999 fps; Analog Gain range min 1.000000, max 22.000000; Exposure Range min 7000, max 332402000;
GST_ARGUS: 1280 x 720 FR = 59.999999 fps; Analog Gain range min 1.000000, max 22.000000; Exposure Range min 7000, max 332402000;
GST_ARGUS: 1280 x 540 FR = 59.999999 fps; Analog Gain range min 1.000000, max 22.000000; Exposure Range min 7000, max 332402000;
GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 3
Output Stream W = 1280 H = 540
seconds to Run = 0
Frame Rate = 59.999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
(NvCameraUtils) Error InvalidState: Mutex already initialized (in Mutex.cpp, function initialize(), line 41)
(Argus) Error InvalidState: (propagating from src/rpc/socket/client/ClientSocketManager.cpp, function open(), line 54)
(Argus) Error InvalidState: (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function openSocketConnection(), line 201)
(Argus) Error InvalidState: Cannot create camera provider (in src/rpc/socket/client/SocketClientDispatch.cpp, function createCameraProvider(), line 98)
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:373 Failed to create CameraProvider
__________________ (END TERMINAL OUTPUT)
After I hit Ctrl+C, the following is printed after what you see above. It looks like just output related to me killing the process, but perhaps some of the GStreamer messages are useful:
__________________(BEGIN MORE TERMINAL OUTPUT)
^C
(python3:31632): GStreamer-CRITICAL **: 16:54:16.394:
Trying to dispose element capsfilter5, but it is in PLAYING instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.
(python3:31632): GStreamer-CRITICAL **: 16:54:16.394:
Trying to dispose element capsfilter4, but it is in PLAYING instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.
(python3:31632): GStreamer-CRITICAL **: 16:54:16.395:
Trying to dispose element capsfilter3, but it is in PLAYING instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.
(python3:31632): GStreamer-CRITICAL **: 16:54:16.395:
Trying to dispose element nvvconv1, but it is in PLAYING instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.
(python3:31632): GStreamer-CRITICAL **: 16:54:16.395:
Trying to dispose element nvarguscamerasrc1, but it is in PLAYING instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.
(python3:31632): GStreamer-CRITICAL **: 16:54:16.395:
Trying to dispose element pipeline1, but it is in PAUSED instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.
OpenCV(3.4.1) Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp, line 890
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:
OpenCV(3.4.1) /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp:890: error: (-2) GStreamer: unable to start pipeline
in function cvCaptureFromCAM_GStreamer
Traceback (most recent call last):
File "dual_camera_stream.py", line 7, in <module>
cam1 = cv2.VideoCapture("nvarguscamerasrc sensor-id=1 ! video/x-raw(memory:NVMM), width=(int)480, height=(int)270, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink", cv2.CAP_GSTREAMER)
KeyboardInterrupt
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
__________________(END MORE TERMINAL OUTPUT)
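Side note on the above: those GStreamer-CRITICAL messages suggest my captures are never set to the NULL state (released) when I kill the process. I have started wrapping the capture loop as below so the pipeline is torn down cleanly; this is just a minimal sketch using the same pipeline string as my script, and it does not fix the second-camera failure:
__________________________ (BEGIN PYTHON SKETCH)
import cv2

def gst_pipeline(sensor_id):
    # Same pipeline string as in my script above, parameterized by sensor ID.
    return (
        "nvarguscamerasrc sensor-id={} ! "
        "video/x-raw(memory:NVMM), width=(int)480, height=(int)270, "
        "format=(string)NV12, framerate=(fraction)30/1 ! "
        "nvvidconv ! video/x-raw, format=(string)BGRx ! "
        "videoconvert ! video/x-raw, format=(string)BGR ! appsink"
    ).format(sensor_id)

cam0 = cv2.VideoCapture(gst_pipeline(0), cv2.CAP_GSTREAMER)
try:
    for _ in range(200):
        ret, im = cam0.read()
        if not ret:
            break
        cv2.imshow('im', im)
        cv2.waitKey(1)
finally:
    # release() sets the GStreamer pipeline to NULL before the object is
    # dropped, which should avoid the "dispose element ... in PLAYING" warnings.
    cam0.release()
    cv2.destroyAllWindows()
__________________________ (END PYTHON SKETCH)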
Other stuff:
I should also note that the very first thing printed is "nvbuf_utils: Could not get EGL display connection". This may be related: as I have been experimenting, I have noticed a number of error messages involving EGL display connections or "NvEglRenderer". When I try to build the tegra_multimedia_api examples, they seem to succeed, but when I run them they fail because of NvEglRenderer. All I am doing is entering a sample directory, in this case ~/tegra_multimedia_api/samples/13_multi_camera, and running "make". The output is:
________________________(BEGIN TERMINAL OUTPUT)
nvidia@jetson:~/tegra_multimedia_api/samples/13_multi_camera$ make
Compiling: main.cpp
Linking: multi_camera
________________________ (END TERMINAL OUTPUT)
So far everything looks fine, but when I run it with ./multi_camera, this is the output:
________________________(BEGIN TERMINAL OUTPUT)
nvidia@jetson:~/tegra_multimedia_api/samples/13_multi_camera$ ./multi_camera
nvbuf_utils: Could not get EGL display connection
[INFO] (NvEglRenderer.cpp:110) Setting Screen width 640 height 480
[ERROR] (NvEglRenderer.cpp:197) Unable to get egl display
[ERROR] (NvEglRenderer.cpp:153) Got ERROR closing display
Error generated. main.cpp, execute:376 Failed to create EGLRenderer.
________________________ (END TERMINAL OUTPUT)
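One guess on my end: I have read that "nvbuf_utils: Could not get EGL display connection" can appear when the process has no X display to attach to (for example, when running over SSH without DISPLAY set). That is only an assumption on my part, but it is trivial to check:
__________________________ (BEGIN PYTHON SKETCH)
import os

# If this prints None, EGL/X11 consumers such as NvEglRenderer and
# xvimagesink have no display to connect to; on the local desktop it is
# usually something like ':0' or ':1'.
print(os.environ.get('DISPLAY'))
__________________________ (END PYTHON SKETCH)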
So, here are the questions I am seeking answers to:

- How can I get both cameras to be seen simultaneously in Python? Is my GStreamer pipeline wrong? Do I need to modify the GStreamer pipeline to handle two cameras in one cv2.VideoCapture object? (A workaround I am considering is in the first sketch after this list.)

- How can I get the tegra_multimedia_api C++ examples to work? How do I overcome the NvEglRenderer errors (possibly related to the DISPLAY check above)?

- I also noticed that the cameras were often somewhat out of sync when I viewed them using the dual-camera command-line method described above. How might I ensure that they are triggered simultaneously? I want to do stereo-vision calculations.

- Finally, does anyone have good reference documentation for configuring NVIDIA/Argus-specific GStreamer pipelines for use in OpenCV? I would like to configure camera settings such as white-balance mode and auto-exposure behavior. The cameras keep flickering between bright and dark images, which I expect is the auto exposure responding to our office lighting; the odd part is that our lights do not change intensity, yet the image brightens or darkens every few seconds. (See the second sketch after this list.)
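Regarding the first question, one workaround I am considering (only a sketch so far, and it assumes the failure is per-process Argus client state rather than my pipeline string) is to open each camera in its own process, since the two independent gst-launch processes already work side by side:
__________________________ (BEGIN PYTHON SKETCH)
import cv2
from multiprocessing import Process

def gst_pipeline(sensor_id):
    # Same pipeline string as in my script above, parameterized by sensor ID.
    return (
        "nvarguscamerasrc sensor-id={} ! "
        "video/x-raw(memory:NVMM), width=(int)480, height=(int)270, "
        "format=(string)NV12, framerate=(fraction)30/1 ! "
        "nvvidconv ! video/x-raw, format=(string)BGRx ! "
        "videoconvert ! video/x-raw, format=(string)BGR ! appsink"
    ).format(sensor_id)

def show_camera(sensor_id, n_frames=200):
    # Each process gets its own GStreamer/Argus state, mirroring the two
    # independent gst-launch processes that already work for me.
    cam = cv2.VideoCapture(gst_pipeline(sensor_id), cv2.CAP_GSTREAMER)
    try:
        for _ in range(n_frames):
            ret, im = cam.read()
            if not ret:
                break
            cv2.imshow('cam{}'.format(sensor_id), im)
            cv2.waitKey(1)
    finally:
        cam.release()
        cv2.destroyAllWindows()

if __name__ == '__main__':
    procs = [Process(target=show_camera, args=(i,)) for i in (0, 1)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
__________________________ (END PYTHON SKETCH)
The obvious downside is that the frames then live in separate processes, which makes the stereo calculations awkward (I would have to pass frames back through queues or shared memory), so I would still prefer a single-process fix.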
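Regarding the last question, the nvarguscamerasrc element appears to expose properties such as wbmode and aelock. I am not certain which of these exist on R31.1, so the names and enum values below are an assumption to be checked with gst-inspect-1.0 nvarguscamerasrc. If they are available, something like this might stop the flicker by locking auto exposure and pinning the white-balance mode:
__________________________ (BEGIN PYTHON SKETCH)
import cv2

# ASSUMPTION: wbmode and aelock exist on this L4T's nvarguscamerasrc, and
# wbmode=3 means fluorescent (to match office lighting). Verify with
# `gst-inspect-1.0 nvarguscamerasrc` before relying on them.
pipeline = (
    "nvarguscamerasrc sensor-id=0 wbmode=3 aelock=true ! "
    "video/x-raw(memory:NVMM), width=(int)480, height=(int)270, "
    "format=(string)NV12, framerate=(fraction)30/1 ! "
    "nvvidconv ! video/x-raw, format=(string)BGRx ! "
    "videoconvert ! video/x-raw, format=(string)BGR ! appsink"
)
cam0 = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
__________________________ (END PYTHON SKETCH)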
Thanks a lot. Let me know if you have any questions and I will be happy to provide more info.