How to get the TX1 camera in OpenCV

Hi,

I’m trying to get TX1 camera frames into OpenCV with:

#include <stdio.h>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main(int argc, char** argv )
{
    VideoCapture cap("nvcamerasrc ! video/x-raw(memory:NVMM), "
        "width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)24/1 ! "
        "nvvidconv flip-method=2 ! video/x-raw(memory:NVMM), format=(string)I420 ! "
        "nvoverlaysink -e ! appsink"); // open the camera
    if(!cap.isOpened()){ // check if we succeeded
        return -1;
    }

    Mat edges;
    namedWindow("edges",1);
    for(;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        cvtColor(frame, edges, COLOR_BGR2GRAY);
        GaussianBlur(edges, edges, Size(7,7), 1.5, 1.5);
        Canny(edges, edges, 0, 30, 3);
        imshow("edges", edges);
       // if(waitKey(30) >= 0) break;
    }
    return 0;
}

but I only get the first frame. How can I get the camera working in OpenCV?
the output is :

Invalid FPSRange Input
Inside NvxLiteH264DecoderLowLatencyInitNvxLiteH264DecoderLowLatencyInit set DPB and MjstreamingInside NvxLiteH265DecoderLowLatencyInitNvxLiteH265DecoderLowLatencyInit set DPB and Mjstreaming
Available Sensor modes :
2592 x 1944 FR=30.000000 CF=0x10d9208a isAohdr=0
2592 x 1458 FR=30.000000 CF=0x10d9208a isAohdr=0
1280 x 720 FR=120.000000 CF=0x10d9208a isAohdr=0
2592 x 1944 FR=24.000000 CF=0x10d9208a isAohdr=1

and I see the first frame.

Thanks a lot in advance,
HN

HN:

Your pipeline is configured with a frame rate of 24 FPS:

nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)24/1 ! nvvidconv flip-method=2 ! video/x-raw(memory:NVMM), format=(string)I420 ! nvoverlaysink -e ! appsink

But the driver is telling you that mode is not available, and what is available is:

2592 x 1944 FR=30.000000 CF=0x10d9208a isAohdr=0
2592 x 1458 FR=30.000000 CF=0x10d9208a isAohdr=0
1280 x 720 FR=120.000000 CF=0x10d9208a isAohdr=0
2592 x 1944 FR=24.000000 CF=0x10d9208a isAohdr=1

You need to set your frame rate to 120 fps for that resolution.

nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)120/1 ! nvvidconv flip-method=2 ! video/x-raw(memory:NVMM), format=(string)I420 ! nvoverlaysink -e ! appsink

Note: if you still want 24 fps you may be able to use the videorate element to accomplish that, though you’ll need to copy into system memory at that point.
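For reference, an untested sketch of what such a pipeline might look like, with videorate placed after the copy into system memory (the exact caps and element placement here are my assumptions, not a verified configuration):

```
nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)120/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)I420 ! videorate ! video/x-raw, framerate=(fraction)24/1 ! videoconvert ! appsink
```

Note that videorate here drops frames from 120 fps down to 24 fps; it does not change what the sensor itself delivers.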

Thank you for your response,
I did set the resolution to 2592x1944 and the fps to 24.0, and I got this error:

Invalid FPSRange Input
Inside NvxLiteH264DecoderLowLatencyInitNvxLiteH264DecoderLowLatencyInit set DPB and MjstreamingInside NvxLiteH265DecoderLowLatencyInitNvxLiteH265DecoderLowLatencyInit set DPB and Mjstreaming
Available Sensor modes :
2592 x 1944 FR=30.000000 CF=0x10d9208a isAohdr=0
2592 x 1458 FR=30.000000 CF=0x10d9208a isAohdr=0
1280 x 720 FR=120.000000 CF=0x10d9208a isAohdr=0
2592 x 1944 FR=24.000000 CF=0x10d9208a isAohdr=1

NvCameraSrc: Trying To Set Default Camera Resolution. Selected 2592x1944 FrameRate = 24.000000 …

Socket read error. Camera Daemon stopped functioning…
GStreamer Plugin: Embedded video playback halted; module nvcamerasrc0 reported: Internal data flow error.

NvCameraSrc: socket write failed… ret=-1
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in icvStartPipeline, file /home/ubuntu/opencv-3.0.0/modules/videoio/src/cap_gstreamer.cpp, line 383
terminate called after throwing an instance of ‘cv::Exception’
what(): /home/ubuntu/opencv-3.0.0/modules/videoio/src/cap_gstreamer.cpp:383: error: (-2) GStreamer: unable to start pipeline
in function icvStartPipeline

Aborted

What commands/flags do you use to compile the program?

I can compile this code, but when I go to run it, it does nothing. If you run this in the terminal,

gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvtee ! nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvoverlaysink -e

the camera comes up just fine (Ctrl+C to exit), but nothing seems to be working in OpenCV on the Jetson TX1, at least, not for me.

Hi hnikoo,

Please use the following GStreamer pipeline.

VideoCapture cap("nvcamerasrc ! video/x-raw(memory:NVMM), "
    "width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)24/1 ! "
    "nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! "
    "'video/x-raw, format=(string)BGR' ! appsink"); // open the camera

kayccc,

I tried your GStreamer pipeline in my very similar OpenCV sample code (modified from bgfg_segm.cpp in the samples directory):

//this is a sample for foreground detection functions
int main(int argc, const char** argv)
{
    help();

    //CommandLineParser parser(argc, argv, keys);
    bool useCamera = true;//parser.get<bool>("camera");
    //string file = parser.get<string>("file_name");
    //VideoCapture cap;
    bool update_bg_model = true;

    //if( useCamera )
    //VideoCapture cap("device://nvcamera0u"); // open the camera
    VideoCapture cap("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720,format=(string)I420, framerate=(fraction)24/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! 'video/x-raw, format=(string)BGR' ! appsink"); // open the camera
    //cap.open(camString);
    //else
    //    cap.open(file.c_str());
    //parser.printParams();

    if( !cap.isOpened() )
    {
        printf("can not open camera or video file\n%s", "");
        return -1;
    }

Then I ran strace on my binary to see what system calls were happening, and this failed system call appears:

open("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720,format=(string)I420, framerate=(fraction)24/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! 'video/x-raw, format=(string)BGR' ! appsink", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)

Does anybody have any idea of the right way to get video out of the camera on the TX1? I know it is possible because the VisionWorks samples/demos can open the camera stream, as in this example (main_nvgstcamera_capture.cpp):

int main(int argc, char** argv)
{
    nvxio::Application &app = nvxio::Application::get();

    nvxio::FrameSource::Parameters config;
    config.frameWidth = 1280;
    config.frameHeight = 720;
    config.fps = 30;

    //
    // Parse command line arguments
    //

    vx_uint32 cameraID = 0u;
    std::string resolution = "1280x720", input;

    std::ostringstream stream;

    app.setDescription("This sample captures frames from NVIDIA GStreamer camera");
    app.addOption('c', "camera", "Input camera device ID", nvxio::OptionHandler::unsignedInteger(&cameraID,
        nvxio::ranges::atMost(1u)));
    app.addOption('r', "resolution", "Input frame resolution", nvxio::OptionHandler::oneOf(&resolution,
        { "2592x1944", "2592x1458", "1280x720", "640x480" }));
    app.addOption('f', "fps", "Frames per second", nvxio::OptionHandler::unsignedInteger(&config.fps,
        nvxio::ranges::atLeast(10u) & nvxio::ranges::atMost(120u)));

    app.init(argc, argv);

    stream << "device://nvcamera" << cameraID;
    input = stream.str();

    parseResolution(resolution, config);

    //
    // Create OpenVX context
    //

    nvxio::ContextGuard context;

    //
    // Messages generated by the OpenVX framework will be processed by nvxio::stdoutLogCallback
    //

    vxRegisterLogCallback(context, &nvxio::stdoutLogCallback, vx_false_e);

    //
    // Create a Frame Source
    //

    std::unique_ptr<nvxio::FrameSource> source(nvxio::createDefaultFrameSource(context, input));

When will the next version of linux for tegra be released with a video for linux driver for the camera?

Hi geoffreywall

From the error it seems OpenCV is not using the GStreamer path, and the pipeline string is being interpreted as a V4L2 device.
Set the GST_DEBUG environment variable before launching the application.
e.g. export GST_DEBUG=3

It will show whether the GStreamer path is being used or not. If it is, it will provide more details about the errors.

So here is some C++ code using OpenCV and GStreamer that seems to be working:

int main(int argc, const char** argv)
{

    putenv("GST_DEBUG=*:3");
    bool useCamera = true;//parser.get<bool>("camera");
    bool update_bg_model = true;
    const char* gst = "nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)24/1 ! nvvidconv flip-method=2 ! videoconvert ! appsink";

    VideoCapture cap(gst);

    if( !cap.isOpened() )
    {
        printf("can not open camera or video file\n%s", "");
        return -1;
    }

I downloaded and built OpenCV 2.4 from GitHub, then linked my program against this version.

My GStreamer pipeline is simpler than the one originally posted by kayccc; I got it working basically by trial and error.

I guess I’ll see if this GStreamer pipeline will work when linking against the Tegra versions of OpenCV.

I also had to put my custom compiled OpenCV version into my LD_LIBRARY_PATH before running my program:
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:.:/home/ubuntu/my_opencv/lib
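A small variation on the line above (the lib path is the one from the post): prepending instead of appending ensures the custom build is found before any other OpenCV on the search path.

```shell
# Prepend the custom OpenCV lib directory so the dynamic linker finds it
# before any other OpenCV build listed later in the search path.
export LD_LIBRARY_PATH=/home/ubuntu/my_opencv/lib:$LD_LIBRARY_PATH
# The first entry should now be the custom build:
echo "$LD_LIBRARY_PATH" | cut -d: -f1
```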

Hmmm. I tried pretty much all the pipelines above with my custom-built OpenCV 2.4.11 linked against, but they all still fail… The last version says:

module nvcamerasrc0 reported: Internal data flow error

I can run the pipeline from the command line just fine…

Edit: found the line that works for me:

"nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)24/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)I420 ! videoconvert ! video/x-raw, format=(string)BGR ! appsink"

As far as I can tell, it is not currently possible to acquire the camera video stream using only OpenCV4Tegra, and this is due to a bug.

It is possible with VisionWorks, but even then it is far from stable. Installing some packages will lead to application failures. (If you install Qt Creator, for example, which is the IDE that OpenVX itself advises for getting started with their libraries, all the VisionWorks examples will fail. I’d like to recall that VisionWorks is an extension of OpenVX.)

It is so disappointing, given that this should be one of the core applications for this device; all of this leaves me stuck in my work.

Hi Anacleto86,

We’re going to have an updated OpenCV4Tegra in the next Jetson TX1 release; it should be coming soon.

Thanks

Hello Kayccc,

Will the updated OpenCV4Tegra be built with GStreamer support, so that we do not need to recompile OpenCV? Also, do you have a timeframe for the next release?

Thanks

If they fix the bug with V4L, I think GStreamer might not be necessary (as long as you don’t have specific needs). In fact, if you compile OpenCV 3.1 you can open the video stream with the usual VideoCapture object. Let’s hope they won’t keep us waiting for too long…

When is the next release of JetPack?

How can I call the API from C?

Thanks

Hi,

Please follow this topic

https://devtalk.nvidia.com/default/topic/987537/videocapture-fails-to-open-onboard-camera-l4t-24-2-1-opencv-3-1/

Here is my method for setting up OpenCV (OpenCV 3.2.0 on a TX2):
https://devtalk.nvidia.com/default/topic/1022685/open-cv-camera-error-jetson-tx2/?offset=16#5230719