Hi,
I recently bought 4 x Leopard Imaging IMX477 cameras, 1 x Multi Camera Adapter with 6 ports, and a TX2 Developer Kit. LI sent two .txt files with instructions for installing the necessary drivers for the cameras. I followed every step correctly and expected to get live video output from the IMX477 cameras, but ended up with a green screen. According to Leopard Imaging’s guides, there are several ways to get live video output from the camera:
1) Using nvgstcapture-1.0, I should have got live output. The only important thing here is to make sure there is a camera on the J1 port of the multi-camera adapter, which I verified. Running the following command in the terminal should have been enough to get live video output:
nvgstcapture-1.0
But somehow, all I can see is a green screen.
Messages on the terminal:
vid_rend: syncpoint wait timeout
vid_rend: syncpoint wait timeout
vid_rend: syncpoint wait timeout
vid_rend: syncpoint wait timeout
Socket read error. Camera Daemon stopped functioning.....
** Message: <main:5374> Capture completed
** Message: <main:5424> Camera application will now exit
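Before retrying, I am planning to check whether the IMX477 driver actually probed and registered a video node (this is my own debugging idea, not from LI’s guide; it assumes v4l-utils is installed and that the camera shows up as /dev/video0):

```shell
# Did the capture driver register any video nodes?
ls /dev/video*

# Which formats does the sensor driver expose? (needs v4l-utils)
v4l2-ctl -d /dev/video0 --list-formats-ext

# Any probe errors from the sensor or the Tegra capture path in the kernel log?
dmesg | grep -iE 'imx477|tegra|camera'
```

If the node or the 4056x3040 modes are missing, the green screen would point at the driver installation rather than at the capture pipelines.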
2) Using GStreamer is another way to capture live video. According to LI’s guides, running the following command should have worked:
gst-launch-1.0 nvcamerasrc fpsRange="20.0 20.0" sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)4056, height=(int)3040, format=(string)I420, framerate=(fraction)20/1' ! nvtee ! nvvidconv flip-method=2 ! 'video/x-raw, format=(string)I420' ! xvimagesink -e
But again, the output is nothing but a green screen.
Messages on the terminal:
Received error from camera daemon....exiting....
Socket read error. Camera Daemon stopped functioning.....
Got EOS from element "pipeline0"
Execution ended after 0:00:16.224172704
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
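One thing I still want to try is feeding the same sink from a synthetic source, to separate the render path from the camera path (videotestsrc does not touch the camera or the camera daemon at all):

```shell
# Test pattern straight to the display; if this works, xvimagesink and
# the display stack are fine and the problem is on the camera side
gst-launch-1.0 videotestsrc ! 'video/x-raw, width=(int)640, height=(int)480' ! xvimagesink -e
```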
3) Using Argus is another way to do this. After installing the Argus software (which I did), I should have been able to get live video output by running the following command in the terminal:
argus_camera --device=0
which should have shown the output of the camera on the J1 port. The Argus application opens correctly, but it can’t display any image when I press the ‘Capture’ button.
Messages on the terminal:
Executing Argus Sample Application (argus_camera)
Argus Version: 0.96.2 (multi-process)
(Argus) Error EndOfFile: Unexpected error in reading socket (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 212)
(Argus) Error EndOfFile: Receiving thread terminated with error (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadWrapper(), line 315)
(Argus) Error InvalidState: Receive thread is not running cannot send. (in src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 94)
(Argus) Error InvalidState: (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 101)
Segmentation fault (core dumped)
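Since nvgstcapture, gst-launch, and Argus all report socket errors from the camera daemon, I am also going to try restarting the daemon before each attempt (I am assuming the service is named nvcamera-daemon on this L4T release):

```shell
# Restart the daemon that nvcamerasrc and Argus talk to over a socket
sudo systemctl restart nvcamera-daemon

# Watch its log while retrying a capture, to see why it dies
sudo journalctl -u nvcamera-daemon -f
```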
4) Using VideoCapture in OpenCV is also a way to capture images. According to Leopard Imaging’s guides, the following code should have worked:
VideoCapture cap("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720,format=(string)I420, framerate=(fraction)24/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! 'video/x-raw, format=(string)BGR' ! appsink");
The code compiles, but I get the following error when I run it:
VIDEOIO ERROR: V4L: device nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720,format=(string)I420, framerate=(fraction)24/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! 'video/x-raw, format=(string)BGR' ! appsink: Unable to query the number of channels
I looked it up on the internet and found a suggestion that got rid of this particular error: I passed CAP_FFMPEG as the second argument of the VideoCapture constructor and tried to display the image:
VideoCapture cap("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720,format=(string)I420, framerate=(fraction)24/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! 'video/x-raw, format=(string)BGR' ! appsink", CAP_FFMPEG);
namedWindow("Test", CV_WINDOW_KEEPRATIO);
bool loop = true;
Mat image;
while(loop){
    bool readImage = cap.read(image);
    imshow("Test", image);
    if(waitKey(10) == 27){
        loop = false;
    }
}
destroyAllWindows();
But when I run it, I get the following error (probably because the captured image is empty, no image is captured at all, or the colorspace is somehow different):
OpenCV Error: Assertion failed (size.width>0 && size.height>0) in imshow, file /home/nvidia/src/opencv-3.4.0/modules/highgui/src/window.cpp, line 331 terminate called after throwing an instance of 'cv::Exception'
what(): /home/nvidia/src/opencv-3.4.0/modules/highgui/src/window.cpp:331: error: (-215) size.width>0 && size.height>0 in function imshow
The program has unexpectedly finished
I looked it up on the internet to find a solution. Some people said the camera produces raw data and that I have to convert the image to RGB somehow. I tried several colorspace conversions with the cvtColor function using different arguments, but they didn’t help in OpenCV either.
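One more thing I suspect: the “VIDEOIO ERROR: V4L” message looks like OpenCV handed my whole pipeline string to the V4L backend as if it were a device name, which would mean my build has no GStreamer support compiled in (and that CAP_GSTREAMER, not CAP_FFMPEG, is the backend I actually want). I plan to verify that like this (the build path is mine; the Python check assumes the bindings were installed from the same build):

```shell
# If GStreamer is reported as NO (or not at all), VideoCapture cannot
# parse the pipeline and falls back to V4L/FFMPEG
python3 -c 'import cv2; print(cv2.getBuildInformation())' | grep -i gstreamer

# Alternative: check the CMake cache of my source build directly
grep -i with_gstreamer /home/nvidia/src/opencv-3.4.0/build/CMakeCache.txt
```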
I should also mention that the WiFi module of the TX2 Developer Kit stopped working after the installation of the camera drivers. I run the following command to list all installed network devices:
sudo lshw -C network
Normally I would see the wireless interface too, but after installing the camera drivers it is gone; I can only see the Ethernet interface.
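In case it matters, I am also going to check whether the WiFi kernel module is still present and loaded after the driver installation (bcmdhd is the Broadcom driver I would expect on a TX2; that is an assumption on my part):

```shell
# Is the Broadcom WiFi module still loaded?
lsmod | grep bcmdhd

# Any WiFi-related messages in the kernel log?
dmesg | grep -iE 'bcmdhd|wlan'

# Does any wlan interface exist at all?
ip link show
```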
Has anyone faced the same or similar problems and managed to solve them, or can anyone give me some advice about them?