Webcam not opening

I’m trying to make a first pass at opening my webcam and doing some image labelling with a USB webcam.
The webcam is recognized by lsusb, and ls /dev/video* shows it as /dev/video1.
I’ve been trying to follow along with this tutorial, as well as this one and this one.

I’ve been able to install gstreamer1.0-plugins-bad-faad,
but not gstreamer1.0-plugins-bad-videoparsers (I got an error … which I think can be ignored, since it should only be needed for IP cameras).

Either way, when I run my test script, I get the error:
... GStreamer: pipeline have not been created, along with “could not read from resource” and “unable to start pipeline” errors.

When trying to open Cheese, the camera is listed under “Devices” … but greyed out.

Any ideas would be appreciated.

Thank you!

You may try using gstreamer with a pipeline such as this one, displaying the camera feed in an X window (assuming you have an X server running locally with a GUI):

gst-launch-1.0 -v v4l2src device=/dev/video1 ! videoconvert ! xvimagesink

For further advice, please post the modes this camera provides, queried with:

# Install package v4l-utils that provides v4l2-ctl command if not yet installed
sudo apt update
sudo apt install v4l-utils

# Query for available modes from /dev/video1
v4l2-ctl -d1 --list-formats-ext

Thanks.
I’ve attached a couple of screen grabs that have the error from:

gst-launch-1.0 -v v4l2src device=/dev/video1 ! videoconvert ! xvimagesink

and the output of the listed formats.


Alright … I do have xvfb installed (sudo apt-get install xvfb) … but I’m guessing it’s not started? Is that what that means?
I found this script … thoughts?

Alright … I did manage to get xvfb running … but that didn’t seem to fix things.
Still researching.

I’m still getting the GStreamer error … “GStreamer: pipeline have not been created. Failed to open camera”

So, assuming you have an X display running and DISPLAY set to it, you may first check that you can display at all. The following uses a software video source, so we can rule out a display issue before moving on to the camera:

gst-launch-1.0 videotestsrc ! xvimagesink

If that’s OK, let’s try using the camera. Be sure to plug it into the full-size USB connector; the micro-USB OTG port cannot achieve high bandwidth.

1. Raw mode YUYV

gst-launch-1.0 -v v4l2src device=/dev/video1 ! video/x-raw, format=YUY2, width=640, height=480, framerate=30/1 ! xvimagesink

2. MJPEG mode

gst-launch-1.0 -v v4l2src device=/dev/video1 ! image/jpeg, format=MJPG, width=640, height=480, framerate=30/1 ! nvjpegdec ! 'video/x-raw(memory:NVMM), format=I420' ! nvvidconv ! xvimagesink

# Or
gst-launch-1.0 -v v4l2src device=/dev/video1 ! image/jpeg, format=MJPG, width=640, height=480, framerate=30/1 ! nvv4l2decoder mjpeg=1 ! nvvidconv ! xvimagesink

Thanks. Some small progress … I think.
I was able to run the first command for the software video source.
I got what looked like an old TV broadcast test screen.

None of the next 3 commands worked.
I got this output:

New Bitmap Image (4).bmp (621.3 KB)

EDIT:

Also … on a whim, I tried this pipeline.
gst-launch-1.0 -v videotestsrc ! nvvidconv ! nvoverlaysink

I got a large version of that test screen, overlaid over my terminal windows … but I could only see it on the monitor attached to my board, and NOT via the VNC session I have open.

EDIT2:

I am able to open/stream from my webcam using nvgstcapture-1.0 --cap-dev-node=0 --camsrc=0

So this sounds like an error with gstreamer … I think.

I’m a bit confused… What succeeds here uses /dev/video0, while you mentioned video1 previously.
If your cam is /dev/video0, you may adjust device property of v4l2src plugin.
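As a quick sanity check (just a generic Python snippet, nothing Jetson-specific), you can list which video device nodes currently exist — USB cams can move between nodes across replugs or reboots:

```python
import glob

# List the V4L2 device nodes currently present; this only shows the
# nodes, not which camera owns each one (use v4l2-ctl for details).
nodes = sorted(glob.glob("/dev/video*"))
print(nodes)
```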

Welp … I’m an idiot. Yes.

gst-launch-1.0 -v v4l2src device=/dev/video0 ! image/jpeg, format=MJPG, width=640, height=480, framerate=30/1 ! nvv4l2decoder mjpeg=1 ! nvvidconv ! xvimagesink

works … guh.

alright. now to figure out why my app isn’t opening things right


I know this feeling … several times each day. I believe that admitting it is a step toward improvement ;-)

Let us know if you need more help for your application.

so … trying to get this to run …

import numpy as np
import cv2

cap = cv2.VideoCapture("/dev/video1") # check this
while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame',gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

Does that mean I need to change the cap = cv2.VideoCapture("/dev/video1") line to the “gst-launch-1.0” line?

That doesn’t seem right.

Setting it to “/dev/video0” gives me a “failed to create pipeline” error.

If you want monochrome frames, you may use a GStreamer pipeline leveraging the Jetson HW to do so:

cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! image/jpeg, format=MJPG, width=640, height=480, framerate=30/1 ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw, format=GRAY8 ! appsink drop=1", cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("failed to open cam from gst backend")

# Now you would read one-channel GRAY8 frames

Hrmmm … it’s saying “Cannot identify device ‘/dev/video0’” …

I uninstalled and reinstalled gstreamer thinking that might be the issue:
sudo apt-get install --reinstall gstreamer1.0-alsa gstreamer1.0-libav gstreamer1.0-plugins-bad gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-ugly gstreamer1.0-pulseaudio libgstreamer-plugins-bad1.0-0 libgstreamer-plugins-base1.0-0 libgstreamer-plugins-good1.0-0 libgstreamer1.0-0

Unfortunately, now, only the first command, the one that outputs in YUY2 format, works. Is this the correct syntax to use that command in my script?

cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! video/x-raw, format=YUY2, width=640, height=480, framerate=30/1 ! xvimagesink", cv2.CAP_GSTREAMER)

When you want to see the output of the pipeline with gst-launch, you would use a video sink such as xvimagesink for displaying in an X window.

For OpenCV, the sink of the pipeline should be appsink, receiving BGR or GRAY8 frames in standard (CPU) memory.

You may try:

# CPU convert
cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! video/x-raw, format=YUY2, width=640, height=480, framerate=30/1 ! videoconvert ! video/x-raw, format=BGR ! appsink drop=1", cv2.CAP_GSTREAMER) 

# HW + CPU convert
cap = cv2.VideoCapture("v4l2src device=/dev/video0 ! video/x-raw, format=YUY2, width=640, height=480, framerate=30/1 ! nvvidconv ! video/x-raw(memory:NVMM) ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink drop=1", cv2.CAP_GSTREAMER)

Do you have a good link that talks through this syntax?
The GStreamer doc is a bit … arcane … for me right now.

I have no perfect link for starting, but you may check this.
