Enabling camera on Jetson TX1 board

You have to follow the scenario in the documentation exactly, and then it will appear. The patch mentioned is supplied on the forum.

Just watch out for CONFIG_SOC_CAMERA_PLATFORM: if you enable it, the virtual test device will take over and the /dev/video0 node that appears will not be the camera.

Still, it is not of much use because, quoting from the documentation:

So you cannot use it with a GStreamer pipeline and the encoders, it seems.

Will there be any update to this situation?

  1. Enable ISP support in the v4l2 implementation?
  2. Release the code for the nvcamerasrc plugin, so that other cameras and resolutions can be supported?

Any info or resources would be appreciated.

Hi,

Thanks for all the information on this post. It really helps!

I am having the same problem. I would recommend creating an nvisp GStreamer element so people could add it right after v4l2src when the camera only gives Bayer. If you want, I can create the element and make it available here, but I would need the ISP API so it can be used within the GStreamer element. Some SoC vendors don't like to release the ISP API or make its use public; is this the case with NVIDIA?

If you check the Multimedia User Guide, it includes pipelines to capture using v4l2src, but they require /dev/video0. Due to the current limitations those would only work with the UVC driver; there should be a note in the document about it.
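
For example, with a USB UVC webcam enumerated as /dev/video0, a plain pipeline along these lines works (the device node and caps are assumptions):

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, width=(int)640, height=(int)480, framerate=(fraction)30/1' ! videoconvert ! xvimagesink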

Meanwhile I will continue my tests with nvcamerasrc. In order to add support for other MIPI CSI sensors, is it possible to create a script or a dummy driver that only initializes the camera sensor so that data can be injected into the input? Would nvcamerasrc grab that data and make the buffers available for the other elements? For instance, what if the one injecting MIPI data is an FPGA?

Is it possible to capture YUV directly if we use another sensor?

It caught my attention that when I run nvgstcapture-1.0 I can see two processes in top:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
757 root 20 0 830496 84904 21504 S 61.0 2.2 8:13.76 nvcamera-daemon
4380 ubuntu 20 0 191340 10256 5004 S 12.6 0.3 0:02.83 nvgstcapture-1.

What is nvcamera-daemon?

Thanks,

-David Soto

I flashed my Jetson TX1 with the latest Jetpack (Linux For Tegra R23.2), and the following command works perfectly:

gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvtee ! nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvoverlaysink -e

I tried to use the following Python program to receive images from the webcam:

source: http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html

import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame',gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

I got the following error:

OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file /hdd/buildbot/slave_jetson_tx_2/35-O4T-L4T-Jetson-L/opencv/modules/imgproc/src/color.cpp, line 3739
Traceback (most recent call last):
  File "webcam.py", line 11, in <module>
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cv2.error: /hdd/buildbot/slave_jetson_tx_2/35-O4T-L4T-Jetson-L/opencv/modules/imgproc/src/color.cpp:3739: error: (-215) scn == 3 || scn == 4 in function cvtColor

I know the problem is that it cannot receive images from the webcam. I also changed the code to just show the received image from the webcam, but it gives an error meaning that no image is coming from the camera.
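
For reference, a minimal check along these lines (a sketch, still opening device index 0) shows whether any frames arrive at all:

import cv2

cap = cv2.VideoCapture(0)
print("opened:", cap.isOpened())   # False means the device could not be opened at all

ret, frame = cap.read()
print("got frame:", ret)           # ret is False and frame is None when no image arrives

cap.release()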

I also tried to use C++ with the following code:

#include "opencv2/opencv.hpp"
using namespace cv;
int main(int argc, char** argv)
{
    VideoCapture cap;
    // open the default camera, use something different from 0 otherwise;
    // Check VideoCapture documentation.
    if(!cap.open(0))
        return 0;
    for(;;)
    {
          Mat frame;
          cap >> frame;
          if( frame.empty() ) break; // end of video stream
          imshow("this is you, smile! :)", frame);
          if( waitKey(1) == 27 ) break; // stop capturing by pressing ESC 
    }
    // the camera will be closed automatically upon exit
    // cap.close();
    return 0;
}

and it compiled without any errors using

g++ webcam.cpp -o webcam `pkg-config --cflags --libs opencv`

But again, when I’m running the program I receive this error:

$ ./webcam
Unable to stop the stream.: Device or resource busy
Unable to stop the stream.: Bad file descriptor
VIDIOC_STREAMON: Bad file descriptor
Unable to stop the stream.: Bad file descriptor

What have I missed? Is there any command I should run to activate the webcam before running this program?

Could you please point to those instructions?

OK, so download the documentation first:
http://developer.nvidia.com/embedded/dlc/l4t-documentation-23-2

Then, when it is open in your browser, navigate to the “Video for Linux User Guide” section: “Example Sensor: OV5693”.

There are the instructions :)

However, you can find the patch mentioned somewhere in the forum here.

I’m wondering if this patch will be included as part of the kernel in the release that has been suggested will be coming “soon”. I have applied patches before, but I’m having problems making a proper patch file out of the information in the email that was presented as a patch, so I got frustrated and quit, hoping that they correct the kernel in the next release. linuxdev, do you have any info on whether the next release will have a corrected kernel?

I do not know the answer, but typically any known issue such as that is included.

OK, fingers crossed it will be. Maybe I’ll make a new attempt.

Hi all,

Hope this isn’t a silly question, but I ran this command

gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvtee ! nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvoverlaysink -e

and it worked great. The camera turned on and I got a full-screen image. While I still had mouse access, I was unable to find a way to shut off the camera and return to the main Ubuntu screen. How do I shut off the camera in this state without physically forcing the Jetson to shut off?

You can use Ctrl-C in the console.

Thank you very much David!

Hi David

I need some help.

The V4L2 software implementation bypasses the Tegra ISP, and is suitable for use when Tegra ISP support is not required, such as with sensors or input devices that provide data in YUV format.

The above statement is from the Jetson TX1 documentation.

But I would like to know whether the normal, basic V4L2 implementation in Linux supports the ISP or not.

And could you also help me understand how the data flows from the OV5693 sensor to the display through V4L2?

If possible, please guide me to understand how the data flows from the sensor.

Hi alamuru,

 What do you mean by "normal basic v4l2 implementation"? 

The main V4L2 capture driver is normally developed by the vendor of the SoC, because they are the ones with the documentation on how to set up the front-end/capture subsystem of the SoC; they know what kind of memory is used to store the captured frames, the alignment, etc. It is up to the vendor to include support for the ISP at the kernel level (in the main V4L2 driver) or not. Normally, what other developers can do is develop a subdevice driver (a camera sensor driver, for instance) that attaches to the main driver created by the vendor (NVIDIA in this case).

The ISP is normally used to adjust the contrast of the image, run image correction algorithms, face detection, histograms, Bayer-to-YUV conversion (encoders normally need YUV), etc. That is why it is important to have access to the ISP. However, it doesn’t mean that without access to the ISP you cannot capture: that is what NVIDIA supports right now, you can grab frames from the video input but you cannot pass them through the ISP, so you don’t get those algorithms. So if you are planning to encode the image captured with V4L2, it is better to look for a sensor that will give you the frame in YUV already.

Vendors normally don’t like to open the API for the ISP. To give you an example, Qualcomm [1]: what they do is use the main V4L2 capture driver to forward requests to a daemon running in user space that executes the capture tasks using the ISP, and that daemon is only a binary, so you cannot change it. To make it worse, in Qualcomm’s case the binary is wired to the camera sensor driver, so it can’t support sensors other than the default ones.

I think NVIDIA did something similar with the daemon in their original design (I hope I am wrong): when you capture using nvcamerasrc you can see that a daemon starts in the background [2], and I suspect that is because nvcamerasrc can use the ISP. However, I read in the forum that they are planning to release the source code of nvcamerasrc, so maybe they changed their mind about it, but don’t make plans based on this. Use a sensor that gives YUV.

In any case, once you have bypassed the ISP and grabbed the frame with the V4L2 capture driver, you only need to push it to the display subsystem. NVIDIA created a GStreamer element for that, called nvoverlaysink; you only need to use it [3]. You can read the Technical Reference Manual available on the NVIDIA website to get a better idea of what each subsystem does.

BTW, the pipeline in [3] uses nvcamerasrc, but if your sensor gives YUV it should work in the same way with v4l2src.
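
For instance, a v4l2src version might look roughly like this (the device node, format, and resolution are assumptions that depend on your sensor, and whether nvvidconv accepts system-memory input can depend on the L4T release):

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, format=(string)UYVY, width=(int)1280, height=(int)720, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvoverlaysink -e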

-David

[1] Jetson TX1 | Tegra vs. SnapDragon | Jetson TX1 CUDA
[2] https://devtalk.nvidia.com/default/topic/926578/nvcamera-daemon/
[3] Jetson TX1 GStreamer Pipelines | Jetson TX1 ARM Load

Thank you for the help David.

What I meant by “normal basic v4l2 implementation” was the V4L2 source code before the vendor modifies it for their specific SoC.

Anyway, now I get the point.

But I would also like to know how the data flows from the image sensor.

I mean the difference in data flow between “capturing using nvcamera” and “capturing using a v4l2 capture application or Yavta”.

Can I get a block diagram of the captured data flow in these two cases?

Hi David

It would be great if you could help me with this too.

I’d like to know how to capture a raw image using nvcamerasrc.

I want to compare the bit packing of a v4l2 raw image and an nvcamerasrc raw image.

Hello,

I read on the forums that GStreamer can be used to access the on-board camera stream.
Is there some code/guideline available on how to access the camera feed and get frames for processing?
I am facing errors while trying to do it in C++ with OpenCV 2.4.11 that I compiled from source.

Here is the error:

ubuntu@tegra-ubuntu:~/code/camera$ ./bin

(bin:16258): GStreamer-WARNING **: 0.10-style raw video caps are being created. Should be video/x-raw,format=(string).. now.
0123
Available Sensor modes : 
2592 x 1944 FR=30.000000 CF=0x10d9208a isAohdr=0
2592 x 1458 FR=30.000000 CF=0x10d9208a isAohdr=0
1280 x 720 FR=120.000000 CF=0x10d9208a isAohdr=0
2592 x 1944 FR=24.000000 CF=0x10d9208a isAohdr=1

NvCameraSrc: Trying To Set Default Camera Resolution. Selected 640x480 FrameRate = 30.000000 ...

GStreamer Plugin: Embedded video playback halted; module nvcamerasrc0 reported: Internal data flow error.
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in icvStartPipeline, file /home/ubuntu/opencv-2.4.11/modules/highgui/src/cap_gstreamer.cpp, line 383
terminate called after throwing an instance of 'cv::Exception'
  what():  /home/ubuntu/opencv-2.4.11/modules/highgui/src/cap_gstreamer.cpp:383: error: (-2) GStreamer: unable to start pipeline
 in function icvStartPipeline

Aborted
ubuntu@tegra-ubuntu:~/code/camera$

For the following code:

//using g++ opencv.cpp `pkg-config --cflags --libs opencv`
#include "opencv2/opencv.hpp"
#include <iostream>

using namespace std;
using namespace cv;

int main(int, char**)
{

    VideoCapture cap("nvcamerasrc ! 'video/x-raw, format=(string)RGB, width=(int)640, height=(int)480 ! ffmpegcolorspace ! video/x-raw-rgb ! appsink");
    
    const char* env = "GST_DEBUG=*:3";
    putenv((char*)env);
    cout<< "0";


    cout<< "1";
    if(!cap.isOpened())
    {
        cout << "cant open camera" << endl;
        return -1;
    }
        cout<<"2";

    Mat frame;
    for(;;)
    {
        cout <<"3";
        cap >> frame; // get a new frame from camera
        cout << "4";
        imwrite("1.png", frame); 
        cout<< "5";
}
    return 0;
}

Regards,
Ankit

Hello Ankit, which L4T version are you using? Also, do you have a customized OS that was installed differently from the standard L4T version as installed by JetPack (this would seem to be the case because you compiled OpenCV4Tegra yourself)?

Hi everybody.

I hope this is not a silly question, but while I’m trying a face detection demo with a camera I’m getting this error:

Device 0:  "NVIDIA Tegra X1"  3853Mb, sm_53, Driver/Runtime ver.7.50/7.0
HIGHGUI ERROR: V4L/V4L2: VIDIOC_CROPCAP
select timeout
select timeout

Here is what I’m typing in the console:

./cascadeclassifier --cascade ~/opencv-2.4.9/data/haarcascades/haarcascade_frontalface_alt.xml --camera "nvcamerasrc ! 'video/x-raw, format=(string)RGB, width=(int)640, height=(int)480 ! ffmpegcolorspace ! video/x-raw-rgb ! appsink"

Could you give me a hint about what to do?

Hello, Bart3k:
Can you try your code with OpenCV 3.0 (or higher)? Or you can test with a USB camera instead of the Jetson TX1 on-board camera.

It seems that OpenCV 2.4.9 does not parse the GST pipeline correctly. I have tried OpenCV 3.1.0 and it works well with a correct GST pipeline for the TX1 on-board camera.
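
As a rough sketch of what such a pipeline can look like (the exact caps and element chain are assumptions that may need adjusting for your L4T release, and OpenCV must be built with GStreamer support):

import cv2

# Sketch: nvcamerasrc delivers I420 frames in NVMM memory, nvvidconv copies them to
# system memory as BGRx, and videoconvert produces the BGR frames OpenCV expects.
pipeline = ("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, "
            "format=(string)I420, framerate=(fraction)30/1 ! "
            "nvvidconv ! video/x-raw, format=(string)BGRx ! "
            "videoconvert ! video/x-raw, format=(string)BGR ! appsink")

cap = cv2.VideoCapture(pipeline)
ret, frame = cap.read()
if ret:
    cv2.imwrite("frame.png", frame)
cap.release()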

br
ChenJian

Hello jachen and ctichenor,

I am now using OpenCV 3.1.0 but am still unable to imwrite() frames using the on-board camera.
Are there any pointers you could provide?
(using L4T R24 on the TX1)

Regards,
Ankit