Jetson Nano Raspberry Pi camera not working

I tried using OpenCV to capture an image, but this is the error I'm getting:

v4l2-ctl -d /dev/video0 --list-formats
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: ‘RG10’
Name : 10-bit Bayer RGRG/GBGB

Error “V4L2: Pixel format of incoming image is unsupported by OpenCV”

You may read this topic.

Can you please show a Python example converting it to an OpenCV-compatible frame?

You may try this one.
If your release doesn’t have nvcamerasrc, you would use nvarguscamerasrc instead, and NV12 format instead of I420, as mentioned in the link from my previous post.
Note that your opencv lib should have been built with gstreamer support, but this should be ok.

I'm getting this error. Please help:

(python3:26947): GStreamer-CRITICAL **: 19:42:18.884: 
Trying to dispose element pipeline0, but it is in READY instead of the NULL state.
You need to explicitly set elements to the NULL state before
dropping the final reference, to allow them to clean up.
This problem may also be caused by a refcounting bug in the
application or some element.

OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/nvidia/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp, line 887
VIDEOIO(cvCreateCapture_GStreamer (CV_CAP_GSTREAMER_FILE, filename)): raised OpenCV exception:

/home/nvidia/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp:887: error: (-2) GStreamer: unable to start pipeline
 in function cvCaptureFromCAM_GStreamer

Traceback (most recent call last):
  File "testcamera.py", line 19, in <module>
    read_cam()
  File "testcamera.py", line 5, in read_cam
    cap = cv2.VideoCapture("nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720,format=(string)I420, framerate=(fraction)24/1 ! nvvidconv flip-method=2 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink")

Have you tried that?

OK, I installed OpenCV from source with GStreamer support, etc.,
using https://github.com/AastaNV/JEP/blob/master/script/install_opencv4.0.0_Xavier.sh

I changed to nvarguscamerasrc instead,

but this is still the error I'm getting:

[ WARN:0] VIDEOIO(cvCreateFileCapture_FFMPEG_proxy(filename)): trying ...

[ WARN:0] VIDEOIO(cvCreateFileCapture_FFMPEG_proxy(filename)): result=(nil) isOpened=-1 ...

[ WARN:0] VIDEOIO(createGStreamerCapture(filename)): trying ...

GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3280 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3280 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 2 
   Output Stream W = 1920 H = 1080 
   seconds to Run    = 0 
   Frame Rate = 29.999999 
GST_ARGUS: PowerService: requested_clock_Hz=13608000
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
GST_ARGUS: Cleaning up
GST_ARGUS: 
PowerServiceHwVic::cleanupResources
CONSUMER: Done Success
GST_ARGUS: Done Success

(python3:16800): GStreamer-CRITICAL **: 11:59:00.148: gst_element_get_state: assertion 'GST_IS_ELEMENT (element)' failed
[ WARN:0] VIDEOIO(createGStreamerCapture(filename)): result=(nil) isOpened=-1 ...

[ WARN:0] VIDEOIO(cvCreateCameraCapture_V4L(filename.c_str())): trying ...

[ WARN:0] VIDEOIO(cvCreateCameraCapture_V4L(filename.c_str())): result=(nil) ...

[ WARN:0] VIDEOIO(createFileCapture_Images(filename)): trying ...

[ WARN:0] VIDEOIO(createFileCapture_Images(filename)): result=(nil) isOpened=-1 ...

[ WARN:0] VIDEOIO(createMotionJpegCapture(filename)): trying ...

[ WARN:0] VIDEOIO(createMotionJpegCapture(filename)): result=(nil) isOpened=-1 ...

camera open failed

You should show the failing code you're trying; that would make it easier to find out what the issue could be.
Here it seems you're using FFmpeg, but I suspect this can only be used with a video file, not a camera.

You may also enable GStreamer debugging to check why the pipeline fails to start.
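For example, from the shell (GST_DEBUG is a standard GStreamer environment variable; the script name testcamera.py is taken from your traceback):

```shell
# Level 3 prints warnings and "fixme" messages from all elements;
# higher values (up to 9) are increasingly verbose.
export GST_DEBUG=3
# A category:level pair narrows logging to a single element, e.g.:
#   export GST_DEBUG=nvarguscamerasrc:6
echo "GST_DEBUG=$GST_DEBUG"
```

Then run `python3 testcamera.py` from the same shell and look at stderr for the first element that reports an error.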
Can you try this code and post the output if it still fails?

import sys
import cv2
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

def read_cam():
    # Note: OpenCV must be built with GStreamer support for this pipeline to work
    cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink")
    if cap.isOpened():
        cv2.namedWindow("demo", cv2.WINDOW_AUTOSIZE)
        while True:
            ret_val, img = cap.read()
            if not ret_val:
                break
            cv2.imshow('demo', img)
            if cv2.waitKey(1) == ord('q'):
                break
    else:
        print("camera open failed")

    cv2.destroyAllWindows()

if __name__ == '__main__':
    print(cv2.getBuildInformation())
    Gst.debug_set_active(True)
    Gst.debug_set_default_threshold(3)
    read_cam()

Thanks a lot! You saved me. This works perfectly fine.

This article will also help others:

https://www.jetsonhacks.com/2019/04/02/jetson-nano-raspberry-pi-camera/

For some reason it's not working within ROS.
I am just getting cap.isOpened() as false:
camera open failed

There is also no debug message.

Please help, thanks.

import sys
import cv2
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError


class image_converter:

  def __init__(self):
    self.image_pub = rospy.Publisher("image_topic_2", Image, queue_size=10)
    # The bridge must exist before read_cam(), which uses it
    self.bridge = CvBridge()
    self.read_cam()

  def read_cam(self):
    cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink")
    print(cap.isOpened())
    if cap.isOpened():
      cv2.namedWindow("demo", cv2.WINDOW_AUTOSIZE)
      while True:
        ret_val, cv_image = cap.read()
        if not ret_val:
          break
        cv2.imshow('demo', cv_image)

        try:
          self.image_pub.publish(self.bridge.cv2_to_imgmsg(cv_image, "bgr8"))
        except CvBridgeError as e:
          print(e)

        if cv2.waitKey(1) == ord('q'):
          break
    else:
      print("camera open failed")
    cv2.destroyAllWindows()


def main(args):
  # Initialize the node before creating publishers
  rospy.init_node('image_converter', anonymous=True)
  ic = image_converter()
  try:
    rospy.spin()
  except KeyboardInterrupt:
    print("Shutting down")
  cv2.destroyAllWindows()


if __name__ == '__main__':
    print(cv2.getBuildInformation())
    Gst.debug_set_active(True)
    Gst.debug_set_default_threshold(5)
    main(sys.argv)

I have no experience with ROS, but I think it installs its own OpenCV version, so there may be a version mismatch. You can find details about the OpenCV version in the output of print(cv2.getBuildInformation()).

ROS-experienced users may be able to advise better.

We bought 20 Jetson Nanos and 20 Camera Module Automatic IR-Cut cameras (https://www.amazon.com/dp/B07DNSSDGG/ref=psdc_172511_t1_B06XTP23LH),

because we heard that everything the RPi can do, the Jetson Nano can do better.

But only the message below is shown:

Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:521 No cameras available.

We re-checked with the “Cheese” app; only the message “No device found” is displayed.

Please help us :(
We don’t know what to do in this situation.

It seems the Jetson Nano doesn’t have support for the OV5647 sensor.

Check this link https://github.com/NVIDIA-AI-IOT/jetbot/issues/29

Ouch. The Camera Module v1 is unsupported, as far as I know. I checked, and there is source for a kernel module, however (actually more than one driver); the one in Linux master is here:

https://github.com/torvalds/linux/blob/master/drivers/media/i2c/ov5647.c

You could try building a new kernel module using this documentation:

But compiling a kernel is very complicated, and if you’ve never done it before (and for another architecture), it’s probably best to wait until either Nvidia or the community provides support.

@nvidia: is there anything preventing the Camera Module v1 from working over the CSI interface, provided kernel modules are built for the ov5647?

@regivaldojr @mdegans

We’ll try your solution. Thank you so much for your support!

If you ordered from Amazon, you could probably return them, as only a short period of time has passed, and exchange them for RPi v2 board cameras, in my opinion.
References:

  • Jetson Nano Supported Components List

  • list of resources

Hi,

We have been working with the OV5647. In case you need help, you can read more about it here:

https://developer.ridgerun.com/wiki/index.php?title=OmniVision_OV5647_Linux_driver_for_Jetson_Nano