How to use USB Webcam in Jetson Nano with Python and OpenCV?

How to use a USB webcam on Jetson Nano with Python and OpenCV? What is the argument of the cv2.VideoCapture() method?

Hi @Nishtha2002,

I am going to move this topic over to the Jetson Nano category for better visibility.

You would first get your USB camera's formats from v4l2-ctl (provided by the apt package v4l-utils). Assuming your USB cam is video node /dev/video1:

v4l2-ctl --device=/dev/video1 --list-formats-ext

and post these here for further advice.

v4l2-ctl will show:
What would be the next step?

No more lines after that ?

No, there are no lines after that

Check your Nano and camera devices:
$ ls /dev/video*

Depending on the results, use the v4l2-ctl command. For example, if /dev/video0 is there:
$ v4l2-ctl --device=/dev/video0 --list-formats-ext

$ ls /dev/video* shows /dev/video0
$ v4l2-ctl --device=/dev/video0 --list-formats-ext shows ioctl: VIDIOC_ENUM_FMT

You may also try:

v4l2-ctl --device=/dev/video0 --all

It shows:

That doesn’t look like a USB camera but a v4l2loopback node.
The v4l2loopback node doesn’t appear to be fed (size is 0).
Could you tell us more about your case?

I have connected a USB 3.0 camera to the Jetson Nano. It works properly with the software provided in the camera SDK and also runs with its sample program. I tried to run the camera with OpenCV in Python, but I don’t know how. When I run the Python script below, it shows: Process finished with exit code 132 (Interrupted by signal 4: SIGILL)
I have attached the Python script I ran and the output of lsusb (493 Bytes)

OpenCV can read a camera through several capture backends, such as V4L2 or GStreamer.

Before using OpenCV, you would first get your camera feed into one of these backends.
Check your camera's SDK to see if it provides a V4L2 interface or a GStreamer camera source plugin.
Telling us the camera model might help.

I use Daheng Imaging camera. Model no. is MER-500-14U3M-L

I fail to find details about it, but it seems the SDK provides Python examples. Once you get an acquired frame, you can wrap its buffer into a numpy array (the Python equivalent of a cv::Mat) and then use OpenCV algorithms on it (it may need to be converted to GRAY8 or BGR first).