TX2 Could not connect to video device(/dev/video0) please check connection

  1. I flashed the TX2 with JetPack 3.3.
  2. I want to run OpenPose on my TX2, so I deployed it following:
    https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/installation_jetson_tx2_jetpack3.3.md
./scripts/ubuntu/install_caffe_and_openpose_JetsonTX2_JetPack3.3.sh
./build/examples/openpose/openpose.bin -camera_resolution 640x480 -net_resolution 128x96

I can see results from my USB camera. However, that camera is a depth camera;
I want to run OpenPose with the onboard camera (/dev/video0) instead.

  3. When I run this:
gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvtee ! nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvoverlaysink -e

A window appears showing the image from the onboard camera.
I then stop it with Ctrl+C.

  4. I run the command again:
./build/examples/openpose/openpose.bin -camera_resolution 640x480 -net_resolution 128x96

However, nothing appears.

  5. I learned the following from
    OpenPose on NVIDIA Jetson TX2 | OpenPose | RidgeRun Developer:
    Note: For OpenCV to be able to access the Jetson’s on board camera via GStreamer, it is required to rebuild OpenCV after GStreamer has been installed.
$ sudo apt install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev gstreamer1.0-plugins-base libgstreamer-plugins-bad1.0-dev gstreamer1.0-plugins-bad
$ sudo apt-get purge libopencv*
$ git clone https://github.com/jetsonhacks/buildOpenCVTX2.git
$ cd buildOpenCVTX2
$ ./buildOpenCV.sh
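After the rebuild finishes, it is worth confirming that the new OpenCV actually reports GStreamer support before retrying OpenPose. A minimal sketch (the `has_gstreamer` helper is my own convenience, not part of OpenCV; on the board you would feed it the text from `cv2.getBuildInformation()`):

```python
def has_gstreamer(build_info: str) -> bool:
    """Return True if OpenCV's build-information text reports GStreamer: YES."""
    for line in build_info.splitlines():
        if "GStreamer" in line and "YES" in line:
            return True
    return False

if __name__ == "__main__":
    # On the Jetson itself (requires the rebuilt OpenCV):
    #   import cv2
    #   print(has_gstreamer(cv2.getBuildInformation()))
    sample = "  Video I/O:\n    GStreamer:                   YES (ver 1.8.3)\n"
    print(has_gstreamer(sample))  # True
```

If this prints False on the board, OpenCV was built without GStreamer and the onboard camera will stay inaccessible to it regardless of permissions.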

I tried camorama and cheese; neither works.
camorama:
Could not connect to video device(/dev/video0) please check connection
cheese:
(cheese:23288): cheese-WARNING **: Device ‘/dev/video0’ cannot capture in the specified format: gstv4l2object.c(3482): gst_v4l2_object_set_format_full (): /GstCameraBin:camerabin/GstWrapperCameraBinSrc:camera_source/GstBin:bin18/GstV4l2Src:v4l2src1:
Tried to capture in YU12, but device returned format BG10

I still don’t know why I can’t use camera 0, and now even when I switch back, the USB camera no longer works.
Please help me; I have been trying to fix this all day…

ls /dev/video*

result:
/dev/video0 /dev/video1

When you run cheese, which user are you logged in as? The default users “ubuntu” and “nvidia” on the R28.x releases (which JetPack 3.3 should install) are members of the “video” group. If a user is not a member of “video”, then permission would be denied. You have the device special files, which means the drivers found the cameras and are ready… perhaps it is just a permission issue.

To see who is a member of group video:

grep video /etc/group

I run everything as user nvidia.

grep video /etc/group
result: video:x:44:ubuntu,nvidia

when I run

./build/examples/openpose/openpose.bin -camera_resolution 640x480 -net_resolution 128x96

I am sure the user is nvidia too.

I am not a “camera guy”, but assuming it isn’t a permission issue, then the earlier message on camera format is probably just telling you there is no conversion:

(cheese:23288): cheese-WARNING **: Device '/dev/video0' cannot capture in the specified format: gstv4l2object.c(3482):

…if you specify a format which the camera doesn’t support, then it won’t force the camera to that format.

Someone else may know of a pipeline for conversion of format.

Please forgive me if I have offended you; English is not my first language.

OK, thank you.
I will go learn about camera formats and pipelines.
Do you have any suggestions for learning how to work with the TX2 camera?
After all, I have found that information about the TX2 is rather scarce.

I am going to ask other readers of the forum what pipeline to use with cheese for “BG10”. I see above this is the format. If you can, please post the camera model and any specifications so others can read this. If someone with knowledge of “BG10” format can offer advice it would be appreciated.

Cameras all have different formats and conversions need to be made in some cases. I personally do not have experience with that, but once the format is correct everything should mostly “just work”.

Hi @sysescool,

As linuxdev pointed out above, it seems that you are facing a camera format issue.

First of all, it is important to recall that USB cameras generally provide a different format (YUV YUY2) than the onboard camera, which provides Bayer RAW10. If you use the nvcamerasrc or nvarguscamerasrc GStreamer elements, the camera stream passes through the ISP unit, which performs debayering and converts the frames to YUV (I420, NV12, or UYVY) formats.

In the case of the following GStreamer pipeline, you are capturing from the onboard camera through the ISP unit, getting 1920x1080 YUV I420 frames at 30 fps (you can check the negotiated caps):

gst-launch-1.0 -v nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv flip-method=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvoverlaysink -e
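For OpenCV-based applications, a common pattern is to wrap such a pipeline in a string that `cv2.VideoCapture` can open with the `CAP_GSTREAMER` backend, letting nvvidconv and videoconvert deliver BGR frames. A sketch, assuming OpenCV was rebuilt with GStreamer support (the `onboard_pipeline` helper and its defaults are illustrative, not part of any library):

```python
def onboard_pipeline(width=1920, height=1080, fps=30, flip=2):
    """Build a GStreamer pipeline string that captures from nvcamerasrc,
    converts to BGR, and hands frames to OpenCV via appsink."""
    return (
        f"nvcamerasrc ! video/x-raw(memory:NVMM), width=(int){width}, "
        f"height=(int){height}, format=(string)I420, framerate=(fraction){fps}/1 ! "
        f"nvvidconv flip-method={flip} ! video/x-raw, format=(string)BGRx ! "
        f"videoconvert ! video/x-raw, format=(string)BGR ! appsink"
    )

# On the TX2 (requires OpenCV built with GStreamer):
#   import cv2
#   cap = cv2.VideoCapture(onboard_pipeline(), cv2.CAP_GSTREAMER)
#   ok, frame = cap.read()
```

The nvvidconv step is what takes the frames out of NVMM memory; without it, appsink cannot receive them.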

If you instead use the v4l2src element, as in the following example pipeline, you will capture frames in Bayer RAW10 format:

gst-launch-1.0 -v v4l2src device=/dev/video0 ! "video/x-bayer, format=rggb, width=1920, height=1080" ! identity silent=false ! fakesink

Each GStreamer element has input and output capabilities (caps) that are negotiated when the elements are linked at pipeline start. There is no videosink element that supports Bayer format, so a conversion to YUV is necessary. It is important to check the capabilities of the elements to avoid compatibility issues.

Cheese and camorama are focused on webcams (USB cameras), so these applications require camera sensors that provide YUV format, as USB cameras do. I don’t think these apps will work with the onboard camera.

Regarding your OpenPose example not working with the onboard camera, there are several things to check:

./build/examples/openpose/openpose.bin -camera_resolution 640x480 -net_resolution 128x96
  1. Be sure that OpenPose supports the onboard TX2 camera (apparently it only supports webcams). I take this from https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/installation_jetson_tx2_jetpack3.3.md : “It is for now recommended to use an external camera with the demo.”

  2. Be sure that the onboard camera supports 640x480, the resolution you are trying to use with the OpenPose example. Run the following pipeline to check:

gst-launch-1.0 -v nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1' ! nvoverlaysink -e
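To probe other modes the same way, the test pipeline can be parameterized by resolution. A sketch (`resolution_test_cmd` is just a hypothetical convenience for generating the gst-launch-1.0 command shown above; if the generated pipeline fails caps negotiation, the sensor/ISP does not support that mode):

```python
def resolution_test_cmd(width, height, fps=30):
    """Return a gst-launch-1.0 command that tries to capture from
    nvcamerasrc at the given resolution and display it."""
    return (
        "gst-launch-1.0 -v nvcamerasrc fpsRange=\"{fps}.0 {fps}.0\" ! "
        "'video/x-raw(memory:NVMM), width=(int){w}, height=(int){h}, "
        "format=(string)I420, framerate=(fraction){fps}/1' ! nvoverlaysink -e"
    ).format(w=width, h=height, fps=fps)

print(resolution_test_cmd(640, 480))
```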

If you want to use the onboard camera, there are several options you should consider:

  1. Libargus API
  2. GStreamer with the nvcamerasrc, nvarguscamerasrc or v4l2src elements
  3. Tegra Multimedia API examples

Below you will find some useful links about using the onboard camera with GStreamer pipelines:
Accelerated_GStreamer_User_Guide
https://developer.download.nvidia.com/embedded/L4T/r31_Release_v1.0/Docs/Accelerated_GStreamer_User_Guide.pdf?-nB1Yao8CAjppByS9JW2LyzfFVsAhUWF-Vp8cU6DgyYs_XZIx9uxmm94PqE1kdukJpHe67vkeg4rHYP5-kNlKYmis1aPH5IdatOiDI2Cj3fvYk-ZlmTB3ltS6b1FPMB6UMoVd-b5W-TN-5nkqlIbtVNrkLoQbGz7OIkOlLfMWaXxvEVfgyg
CSI Cameras on the TX2 (The Easy Way)
http://petermoran.org/csi-cameras-on-tx2/#selecting-the-right-pipelines
Gstreamer_pipelines_for_Jetson_TX1
https://developer.ridgerun.com/wiki/index.php?title=Gstreamer_pipelines_for_Jetson_TX1
L4T Multimedia API Reference Documentation
https://docs.nvidia.com/jetson/l4t-multimedia/mmapi_build.html

Thank you, linuxdev and dgarba.

“It is for now recommended to use an external camera with the demo.” I am aware of that.
However, we use a binocular camera to build a 3D scene with ROS, and to control cost
(among other reasons) we would prefer to use the onboard camera.

I really thank all of you for the valuable suggestions and the pointers for learning the TX2.

I will try my best to get the onboard camera working.

Thank you.