When I try to use my webcam (PlayStation Eye, for PS3) with OpenCV 2.4.13 (C++), like this:

VideoCapture camera(1);

–> I confirmed my camera's device number is 1.
–> When I connect it over USB to the TX2, Linux creates /dev/video1 immediately.

I get this error.

I already installed the gspca/ov534 driver, and when I tested my webcam on a website, it worked.

I installed v4l2ucp, v4l-utils, and libv4l-dev,

but it still didn't work, with either OpenCV or the Cheese webcam booth.
Please help me…

I installed OpenCV 2.4.13 via JetPack 3.1.

Hmm… it also doesn't work with Cheese.

$ sudo cheese -d /dev/video1
–>> Segmentation fault (core dumped)

I don’t know what to do…

This file is a screenshot of my bash session.

The scan of video devices was successful.

What formats does your camera support?

v4l2-ctl -d /dev/video1 --list-formats

OpenCV expects GRAY8 or BGR input, so if your camera doesn't provide one of these, you have to do the conversion yourself.
One way to do that is with GStreamer, but OpenCV4Tegra doesn't support it; you would have to build your own OpenCV 3.2.0 with WITH_GSTREAMER=ON to use a GStreamer pipeline in OpenCV.

Thank you for your reply, Honey_Patouceul.

The result was a little sad… its format is 'YUYV'.

I want to accelerate OpenCV performance using the GPU via OpenCV4Tegra (version 2.4.13).

Does GStreamer only work with OpenCV version 3.2.0 or later?

Is there any way to use this camera with version 2.4.13?

Or can I use the GPU with an OpenCV 3.2.0 build on the TX2?
(Here's a link I think is awesome: https://github.com/jetsonhacks/buildOpenCVTX2 )

AFAIK, GStreamer support is available in OpenCV 3.x; it may also be available for OpenCV 2.4.13, I don't know.
[EDIT: looking at http://docs.opencv.org/3.2.0/d6/d15/tutorial_building_tegra_cuda.html#tutorial_building_tegra_cuda_opencv_24X, it seems it is supported for 2.4.13 as well.]
OpenCV4Tegra is based on 2.4.13 and has many optimizations for Tegra, but it cannot support GStreamer (it's closed source, so you cannot rebuild it).

Anyway, you can use the GPU with OpenCV 3.2 (enable CUDA arch 6.2 for the TX2).
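As a sketch, a CMake configuration along these lines would enable both; the exact paths and module selection vary, and the jetsonhacks buildOpenCVTX2 script linked above is a more complete reference:

```shell
# Sketch of an OpenCV 3.2.0 CMake configure for the TX2:
# CUDA enabled for arch 6.2, GStreamer support on, Python 2 bindings on.
cmake \
    -DCMAKE_BUILD_TYPE=Release \
    -DWITH_CUDA=ON \
    -DCUDA_ARCH_BIN=6.2 \
    -DCUDA_ARCH_PTX="" \
    -DWITH_GSTREAMER=ON \
    -DBUILD_opencv_python2=ON \
    ..
```
Restricting CUDA_ARCH_BIN to 6.2 keeps the build time down by compiling kernels only for the TX2's GPU.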

GStreamer has many plugins and can handle your YUYV-to-BGR conversion so the frames are suitable for OpenCV. Recent versions of OpenCV may also be able to handle YUYV input directly.

One thing to be aware of: OpenCV reads frames into CPU memory, so you will have to copy each frame to GPU memory for CUDA processing, and probably copy it back to the CPU for further processing, display, or sending. This adds latency and may become a bottleneck if you are aiming for a high framerate at high resolution.

Thank you again, Honey_Patouceul!!

Can you help me one last time, please?

I built OpenCV 3.2.0 with GPU and GStreamer support.

Then I made a simple application with Python 2.7.12, but it was not GPU-accelerated.

After some Googling, I found that I need to use GpuMat from OpenCV, but I only see C++ examples.
Is there a GPU acceleration module for Python in OpenCV 3.2.0?

Lastly, I really don't know how to copy frames to GPU memory for CUDA processing…
If I copy the frames, does the GPU processing happen automatically?

I love you Patouceul

Thanks, but isn't this a bit early? We know so little about each other… ;-)

I'm afraid I'll disappoint you early as well… I'm not familiar with the Python API; I use OpenCV from C++.
I would just advise making sure you enabled Python 2 support when building OpenCV 3.2, and, if you have not purged OpenCV4Tegra, checking which OpenCV version Python actually uses with:

import cv2
print cv2.__version__   # should report 3.2.0, not 2.4.13
print cv2.__file__      # shows which installation Python picked up
In C++, the functions for copying to/from the GPU are upload and download on GpuMat. You can read the OpenCV documentation to see what is available in the CUDA modules.