VideoCapture fails to open onboard camera (L4T 24.2.1, OpenCV 3.1)

I’m trying to get the following code snippet to open a stream to the onboard camera, but can’t.

#include <stdio.h>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
  VideoCapture cap("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720,format=(string)I420, framerate=(fraction)24/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink");

  if (!cap.isOpened())
    {
      cout << "Failed to open camera." << endl;
      return -1;
    }

  for(;;)
    {
      Mat frame;
      cap >> frame;
      if (frame.empty())      // stop if a frame grab fails
        break;
      imshow("original", frame);
      waitKey(1);
    }

  cap.release();
}

The snippet was taken from the following post: https://devtalk.nvidia.com/default/topic/943129/jetson-tx1/highgui-error-v4l-v4l2-while-opening-camera-with-opencv4tegra-l4t-r24/post/4921383/#4921383.

I built the test program against opencv 3.1.0 with the following:

g++ -o test -I /usr/include/opencv -Wall test.cpp -L/usr/lib -l:libopencv_core.so.3.1.0 -l:libopencv_videoio.so.3.1.0 -l:libopencv_imgproc.so.3.1.0 -l:libopencv_highgui.so.3.1.0

The program doesn’t crash; it simply fails to open the video capture and writes “Failed to open camera.” to the console.

I built opencv 3.1 from source using the procedure given here: http://docs.opencv.org/master/d6/d15/tutorial_building_tegra_cuda.html

It’s also worth noting that using a simple test GStreamer pipeline string like:

VideoCapture cap("fakesrc ! videoconvert ! appsink");

also does not work.
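
(For what it’s worth, a quick way to check whether a given OpenCV build was compiled with GStreamer support at all is to print cv::getBuildInformation() and look for the GStreamer entry under Video I/O - a minimal sketch, using only the stock OpenCV API:)

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
  // Dump the build configuration; the Video I/O section should list
  // "GStreamer: YES" if pipeline strings can be expected to work at all.
  std::cout << cv::getBuildInformation() << std::endl;
  return 0;
}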

I’m using L4T 24.2.1 and Ubuntu 16.04 (flashed with JetPack 2.3.1). Any help is greatly appreciated.

Do you have the driver installed for your camera?
Does it need a service running? If yes, is it running?
Are you sure you have no other program using the same camera?

Thanks for responding.

I assumed that the drivers for the onboard camera were preinstalled, and they seem to be, since I can get live video running with GStreamer in the console using

gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvtee ! nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvoverlaysink -e

I have launched no other program that uses the camera, and as this is a fresh install (with the exception of the updated OpenCV version), I wouldn’t imagine there is another program using it.

Whether it needs a service running, I’m not sure. The code snippet I provided uses a GStreamer pipeline; perhaps there’s a GStreamer service that needs to be running. I will look into it.

Most of these tools are pretty new to me, so thanks for your patience.

Hi ian_riley,

What error message do you get? I have tried your pipeline and got a preview window successfully.

I am using OpenCV 3.2.0, which is the latest version from the OpenCV GitHub.

Hi WayneWWW,

No error message - the isOpened() method just always returns false. I had issues installing 3.2, but I’ll try again; can you tell me how you went about it?

I directly copied and pasted your code snippet, just modifying your g++ command to link against the libopencv*.so.3.2.0 libraries.

I can see the camera capture shown on my display.

Hi ian_riley,

Have you clarified and resolved this issue?
Or do you still need support from our side?

Thanks

Kaycc,

No, I haven’t resolved the issue. When I use a Logitech C310 via the USB port, the VideoCapture opens, so I’ve been using that. The onboard camera I still haven’t gotten to work.
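
(For reference, the USB camera path doesn’t need a pipeline string at all - the V4L2 backend opens it by device index. A one-line sketch, assuming the C310 enumerates as /dev/video0; the index may differ on your system:)

VideoCapture cap(0);  // opens /dev/video0 via the V4L2 backend; adjust the index if needed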

Thanks Wayne, I’m curious what steps you followed to build and install opencv 3.2.

ian_riley,

I followed similar steps to those for installing OpenCV 3.1.

Steps:
http://dev.t7.ai/jetson/opencv/

Thanks again Wayne,

So I built opencv 3.2, and linked against it with

g++ -o test -I /usr/include/opencv2 -Wall test.cpp -L/usr/lib -l:libopencv_core.so.3.2.0 -l:libopencv_videoio.so.3.2.0 -l:libopencv_imgproc.so.3.2.0 -l:libopencv_highgui.so.3.2.0

Importantly, when I looked back at the instructions I followed for building 3.1, the cmake flag WITH_GSTREAMER was set to off (http://docs.opencv.org/master/d6/d15/tutorial_building_tegra_cuda.html). I changed that when I built 3.2, but I still don’t get a video preview. I really thought that setting the flag correctly would make GStreamer work… perhaps I’m compiling incorrectly?
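
(Two checks that might narrow this down - a sketch, assuming only the stock OpenCV API and GStreamer’s standard GST_DEBUG environment variable: confirm at runtime which OpenCV version the binary actually loads, and raise GStreamer’s log level so a failing pipeline prints warnings instead of failing silently. GST_DEBUG can equally be exported in the shell before running ./test.)

#include <cstdlib>
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
  // Ask GStreamer to log errors and warnings; set this before the first
  // VideoCapture is constructed (or export it in the shell instead).
  setenv("GST_DEBUG", "2", 1);

  // Confirm which version is actually loaded at run time (should print 3.2.0 here).
  cout << "OpenCV " << CV_VERSION << endl;

  VideoCapture cap("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)24/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink");
  cout << (cap.isOpened() ? "pipeline opened" : "pipeline failed to open") << endl;
  return 0;
}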

It also took me a while to install opencv3.2.0. I think turning the flag on is necessary.

Please make sure every GStreamer component in your pipeline is working.

If the C++ version is not working, please try this Python code. Both cases work fine on my device.

import sys
import cv2

def read_cam():
    cap = cv2.VideoCapture("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720,format=(string)I420, framerate=(fraction)24/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink")
    if cap.isOpened():
        cv2.namedWindow("demo", cv2.WINDOW_AUTOSIZE)
        while True:
            ret_val, img = cap.read()
            if not ret_val:  # stop if a frame grab fails
                break
            cv2.imshow('demo', img)
            cv2.waitKey(10)
    else:
        print("camera open failed")

    cv2.destroyAllWindows()


if __name__ == '__main__':
    read_cam()

Hi Wayne,

I installed OpenCV following http://dev.t7.ai/jetson/opencv/

I get a black-screen output with the Jetson onboard camera.

Please help. Thanks in advance.

I got it working with a slightly different pipeline (with format I420 between nvvidconv and videoconvert).
You may check it here: https://devtalk.nvidia.com/default/topic/1001696/jetson-tx1/failed-to-open-tx1-on-board-camera/post/5117370/#5117370
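
(Reconstructed from the description above - treat the exact caps as an assumption - the working pipeline string would be along these lines:)

VideoCapture cap("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)24/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)I420 ! videoconvert ! video/x-raw, format=(string)BGR ! appsink");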

Trying to get the above working, but hitting an error!

"(python:11184): Gtk-WARNING **: cannot open display: "

I’m thinking this is because I am trying to run this script during an SSH session? Any tips on getting this to work would be MUCH appreciated. Is there not a requirement to set the client IP before running this script?

Edit: I did also try…

$ sudo “export DISPLAY=:0” python nvidia_code.py

but I’m still hitting errors…

Available Sensor modes :
2592 x 1944 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
2592 x 1458 FR=30.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10
1280 x 720 FR=120.000000 CF=0x1109208a10 SensorModeType=4 CSIPixelBitDepth=10 DynPixelBitDepth=10

NvCameraSrc: Trying To Set Default Camera Resolution. Selected 1280x720 FrameRate = 24.000000 …

Invalid MIT-MAGIC-COOKIE-1 key
(python:13796): Gtk-WARNING **: cannot open display: :0

Looks like an X display issue (is it different when just displaying an image?).
If you are launching this through ssh from host at IP ww.xx.yy.zz with its X server running, set DISPLAY to:

export DISPLAY=ww.xx.yy.zz:0.0

You could also try

xhost +

on the host where you expect to see the display.

For the ssh connection try “ssh -Y name@host” to automatically forward and authorize any GUI from the remote to pop up on the local host (the -Y should do some DISPLAY and other auth work).

@linuxdev - you are a legend, the -Y flag worked like a charm (I’m fairly new to programming so that’s definitely something I can keep!). It seems the frame rate is seriously slow at the moment, but I’m guessing I can change that with the GStreamer pipeline or maybe look at some other options in Python. The main thing for now is that I can get some camera output. Seriously, thanks!

Regarding remote display versus local display in X11…anything running in the X environment generates a series of events, and those events are what the X server deals with, eventually producing graphics rendering through the graphics card as a side effect of those events being processed.

Locally driven displays have a fairly direct route to reaching video card rendering; events are somewhat delayed when instead going to a remote system and its security. This can be fast remotely if you are looking at control or vector operations; if rendering bitmaps, then you might be sending an event for each pixel.

Differences do not stop there, though. When running a program on a remote Jetson and displaying locally to a PC, none of the actual rendering libraries (the things a GPU talks to) run on the Jetson…these offload to the desktop PC. If you have hardware acceleration on the Jetson, then it no longer participates. Rendering via GPU instead goes through the desktop PC and its libraries. Not knowing this can be a big shock…in some cases the PC won’t be very fast, and in others (such as CUDA) you might find that the 1080Ti is doing the CUDA instead of the Jetson (and if you were not aware of this and think the Jetson is running that fast, you’re in for an unpleasant surprise). If the desktop PC did not have the correct version of CUDA, then the program would completely fail.

Somewhere in the middle, if you are serious about using the Jetson’s computing power, yet want to display on a PC, you’ll need some form of virtual desktop. The Jetson would render to a virtual screen which has no actual hardware connected, but the GPU and CUDA would not know or care…the remote PC then gets updated via this virtual desktop instead of via X events. In that case the PC would not need CUDA of its own for the Jetson to do CUDA work and to display correctly no matter what the PC configuration is (this would even be operating system agnostic).

Wow, OK, this is pretty overwhelming, but hardly unexpected given the nature of the device.

To be perfectly honest, rendering on a PC / remotely is not the goal per se - I’m looking at embedding a neural network on a standalone device to do image classification in environments where connectivity cannot be guaranteed (as many people using the Jetson may be). For argument’s sake, in my use case I only care about positive examples, so one workaround could be to just send 10 stills of the positives from the camera stream up to the cloud rather than streaming in real time. The stream has been more of a demo to prove the concept.

Question (which I will be able to test tomorrow): if I run the script locally on the Jetson with it plugged into a monitor or similar, I’m guessing the frame rate will be much quicker? It sounds like when running on the Jetson and displaying locally on my Mac, I may as well be using a Raspberry Pi!

Frame rate on a script run from the Jetson and displaying to the Jetson (DISPLAY environment variable to the Jetson) will have a speed based on the GPU of the Jetson. Among embedded devices that will be quite fast…only the TX2 would be faster (and it is much faster, yet pin-compatible). If the Jetson has no monitor, but has a virtual X11 desktop, it will be just as fast (CUDA does not care if the frame buffer is associated with a real monitor or oblivion). CUDA and video are both products of GPU acceleration on the Jetson under those conditions.

Sending X events to a remote system (X forwarding) could be faster if the remote PC has a faster GPU (a 1080Ti is much faster than a Jetson’s GPU). Even so, you get network slowdowns, so it depends on the data…it is a tug-of-war game between network slowdowns and beefier graphics on the PC or Mac.

Sending a virtual desktop to a remote system depends on network bandwidth for video rate only; GPU work would be from the Jetson and independent of frame rate for display. It is possible that a Mac displaying a virtual desktop over a directly attached gigabit (meaning through a switch) could be very fast since the Jetson is using its GPU, but it would still be slower than if displaying directly on the Jetson. Unless your Mac had an NVIDIA video card with CUDA installed to the Mac of the same version as the Jetson you could not remote display to the Mac via X event forwarding…only a virtual desktop would work.