Cannot Capture Video Stream from an External USB Camera in OpenCV

I am having some trouble getting a video stream from an external USB camera. The output below is from running lsusb -t.

nvidia@nvidia-desktop:~/Documents/MC/build$ lsusb -t
/: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=tegra-xusb/3p, 5000M
/: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=tegra-xusb/4p, 480M
|__ Port 2: Dev 25, If 0, Class=Hub, Driver=hub/4p, 12M
|__ Port 1: Dev 30, If 0, Class=Video, Driver=uvcvideo, 12M
|__ Port 1: Dev 30, If 1, Class=Video, Driver=uvcvideo, 12M
|__ Port 1: Dev 30, If 2, Class=Audio, Driver=snd-usb-audio, 12M
|__ Port 1: Dev 30, If 3, Class=Audio, Driver=snd-usb-audio, 12M
|__ Port 1: Dev 30, If 4, Class=Audio, Driver=snd-usb-audio, 12M
|__ Port 3: Dev 29, If 1, Class=Human Interface Device, Driver=usbhid, 12M
|__ Port 3: Dev 29, If 0, Class=Human Interface Device, Driver=usbhid, 12M
nvidia@nvidia-desktop:~/Documents/MC/build$

Is your USB camera connected through an external hub? 12M (USB 1.1 full speed) seems very low for uvcvideo.
Do you have other high-speed devices connected to that hub, or to the Nano? You may try with nothing more than your camera, mouse, and keyboard connected.
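You can also read each device's negotiated speed directly from sysfs (a quick check, assuming the usual sysfs layout; values are in Mbit/s, so 12 means USB 1.1 full speed and 480 means USB 2.0 high speed):

grep . /sys/bus/usb/devices/*/speed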

The only other devices connected to the hub are a USB mouse and keyboard. However, the USB hub itself does give some issues from time to time, so I am not sure if it is the culprit for the low speed. There are times when the USB dongle is not detected by the hub.

Note that if this hub is USB 1.x, that would explain the low speed.
Can you connect the camera directly into one of the Nano devkit's USB connectors?

The TX2 has only one USB port, so I can’t connect a mouse and keyboard as well as the USB camera at the same time. I’m not sure if that is what you mean by connecting to the devkit USB connector.

Sorry, I’m helping on many topics and got confused.
If you’re using the TX2 devkit, you would use the micro-USB OTG port with an adapter cable to connect your hub for the mouse and keyboard (USB 1.x or better, though it may not run faster than 12M), and keep the full-size USB connector for your camera so that it can run at USB 2.0 speed at least (I haven’t used a TX2 with recent releases).

Okay, so I need to get a micro-USB OTG adapter cable before we can continue trying to solve the problem?

You would just use a micro-USB to female USB-A adapter cable such as this one, connect your hub with the mouse and keyboard (which don’t require high speed) into the devkit’s micro-USB OTG connector, and keep the full-size USB host connector of the TX2 devkit for the higher-speed USB camera.

My bad, I had the same cable in mind but may have worded it incorrectly. So the mouse and keyboard connect to the micro-USB port and the camera connects to the USB-A port. To confirm: in order to move forward with this problem, I must first get the micro-USB to USB-A cable?

If you need to order such a cable but want to try now, it should be possible to test with a keyboard from a remote host.
Can you connect the camera adapter into the full-size USB connector of the TX2 devkit without the hub (so no mouse nor keyboard), keep a monitor connected to the Jetson, and connect into the TX2 with ssh from another host?

I will purchase the cable tomorrow so that I can use the board directly instead of through another PC. I don’t mind trying to SSH directly into the Jetson; however, I am not sure how to do this since my host PC is a Windows PC. The Jetson is connected to a router via LAN cable.
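(For what it's worth, recent Windows 10 builds ship an OpenSSH client, so from cmd something like the following should work; the user name and IP here are placeholders for your own setup:

ssh nvidia@<jetson-ip-address>

The Jetson's IP can be found from the router's client list, or by running ifconfig on the Jetson itself.)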

I managed to connect to the TX2 from the Windows cmd terminal; however, I am not sure how I am going to use the GUI of the board.

From a Windows host, PuTTY should be OK for trying.

You would need a monitor connected to the Jetson and a GUI session started on the Jetson with mouse and keyboard, so that you can open a terminal and run:

echo $DISPLAY

That should give something like :0 or :1.
Then disconnect your hub with the mouse and keyboard, and connect only your cam into the TX2 devkit’s full-size USB connector.

Now from the Windows host, connect with PuTTY into the TX2 (the ssh port should be 22), log in, and run:

# Adjust to what it showed above 
export DISPLAY=:0
#export DISPLAY=:1

# Now any X request from this shell will be sent to the X server rendering on that display (the Jetson's local display)

Then, from the same terminal, retry these commands.
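As a quick sanity check that the display export works before involving the camera (a test pattern only; this assumes the GUI session owns display :0, adjust as needed):

export DISPLAY=:0
gst-launch-1.0 videotestsrc ! xvimagesink

If a test-pattern window appears on the Jetson's monitor, camera pipelines launched from the same ssh session should render the same way.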

Okay, I’ll go download PuTTY.

So it’s good news and bad news. The good news is that the camera did manage to open a stream. The bad news is that the frame rate was horrible.

gst-launch-1.0 v4l2src device=/dev/video2 ! image/jpeg,width=1280,height=720,framerate=30/1 ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvvidconv ! xvimagesink

The line above was the first to open the stream. The frame rate was bad, but it still displayed the video stream. There were some warnings in the terminal window on the host PC.

WARNING: from element /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0:
There may be a timestamping problem, or this computer is too slow.

The only other line to open a display was:

gst-launch-1.0 v4l2src device=/dev/video2 ! image/jpeg,width=1280,height=720,framerate=30/1 ! jpegparse ! nvjpegdec ! 'video/x-raw(memory:NVMM),format=I420' ! nvvidconv ! xvimagesink

The frame rate of the stream produced by the line above I would equate to a still image being updated about once a minute. It also gave a similar, if not the same, warning as the previous line.

The last line to somewhat work was:

gst-launch-1.0 v4l2src device=/dev/video2 ! video/x-h264,width=1280,height=720,framerate=30/1 ! h264parse ! nvv4l2decoder ! nvvidconv ! xvimagesink

The only thing is that no stream window actually opened.
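If the camera does not actually expose an H.264 stream, that would explain why nothing opened with that pipeline. One way to check which pixel formats and frame rates the camera offers (assuming the v4l-utils package is installed):

v4l2-ctl -d /dev/video2 --list-formats-ext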

I believe the 20 message limit should be over by now.

Try adding sync=false to the end of your pipelines.

I tried adding the option above to the pipelines, but I’m getting some errors. I'm not sure if I’m adding it in correctly.

nvidia@nvidia-desktop:~$ gst-launch-1.0 v4l2src device=/dev/video1 ! image/jpeg,width=1280,height=720, framerate=30/1 ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvvidconv ! xvimagesink ! sync=false

(gst-launch-1.0:9433): GStreamer-CRITICAL **: 22:49:31.466: gst_element_link_pads_filtered: assertion ‘GST_IS_BIN (parent)’ failed
ERROR: pipeline could not be constructed: syntax error.
nvidia@nvidia-desktop:~$

nvidia@nvidia-desktop:~$ gst-launch-1.0 v4l2src device=/dev/video1 io-mode=2 ! video/x-raw, format=YUY2, width=1280, height=720, framerate=30/1 ! xvimagesink ! sync=false

(gst-launch-1.0:9418): GStreamer-CRITICAL **: 22:49:02.403: gst_element_link_pads_filtered: assertion ‘GST_IS_BIN (parent)’ failed
ERROR: pipeline could not be constructed: syntax error.

nvidia@nvidia-desktop:~$ gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw, format=YUY2, width=1280, height=720, framerate=30/1 ! xvimagesink ! sync=false

(gst-launch-1.0:9415): GStreamer-CRITICAL **: 22:48:36.955: gst_element_link_pads_filtered: assertion ‘GST_IS_BIN (parent)’ failed
ERROR: pipeline could not be constructed: syntax error.

sync=false is a property of the sink (it tells the sink to display buffers as soon as they arrive, rather than synchronizing them against the pipeline clock), so it is appended to the sink element rather than linked as a new element:

gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw, format=YUY2, width=1280, height=720, framerate=30/1 ! xvimagesink sync=false
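The same applies to the MJPEG decode pipeline from earlier:

gst-launch-1.0 v4l2src device=/dev/video1 ! image/jpeg,width=1280,height=720,framerate=30/1 ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvvidconv ! xvimagesink sync=false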

Sir, you are a wizard. This solved the frame-rate issue when I launched the pipeline directly from the terminal. I apologize for the delay in my responses, as things got hectic over the weekend.

Is there any way to launch this pipeline directly from OpenCV code? I’ve been trying and have been getting some errors.
This is the code I tried:

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    Mat frame;
    //--- INITIALIZE VIDEOCAPTURE
    int deviceID = 2;        // 0 = open default camera (unused with a pipeline string)
    int apiID = cv::CAP_ANY; // 0 = autodetect default API (unused with a pipeline string)
    const char* gst = "gst-launch-1.0 v4l2src device=/dev/video1 ! image/jpeg,width=1280,height=720,framerate=30/1 ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvvidconv ! xvimagesink sync=false";
    // open selected camera using selected API
    VideoCapture cap(gst);
    // check if we succeeded
    if (!cap.isOpened()) {
        cerr << "ERROR! Unable to open camera\n";
        return -1;
    }
    //--- GRAB AND WRITE LOOP
    cout << "Start grabbing" << endl
         << "Press any key to terminate" << endl;
    for (;;)
    {
        // wait for a new frame from camera and store it into 'frame'
        cap.read(frame);
        // check if we succeeded
        if (frame.empty()) {
            cerr << "ERROR! blank frame grabbed\n";
            break;
        }
        // show live and wait for a key with timeout long enough to show images
        imshow("Live", frame);
        if (waitKey(5) >= 0)
            break;
    }
    // the camera will be deinitialized automatically in VideoCapture destructor
    return 0;
}

I managed to launch the pipeline from OpenCV using the line below

VideoCapture cap("v4l2src device=/dev/video1 ! image/jpeg,width=1280,height=720,framerate=30/1 ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvvidconv ! appsink drop=1", cv::CAP_GSTREAMER);

Now I’m trying to get the stream to be produced in color.

EDIT: I managed to get the stream in color using the line below:

VideoCapture cap("v4l2src device=/dev/video1 ! image/jpeg,width=1280,height=720,framerate=30/1 ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! queue ! appsink drop=1", cv::CAP_GSTREAMER);
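For reference, a minimal sketch of the complete program around that final pipeline (assuming OpenCV was built with GStreamer support; the device, caps, and elements are exactly those from the thread above):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // MJPEG from the camera, decoded by the Jetson hardware decoder,
    // converted to BGR for OpenCV, and handed over through appsink
    const std::string pipeline =
        "v4l2src device=/dev/video1 ! "
        "image/jpeg,width=1280,height=720,framerate=30/1 ! "
        "jpegparse ! nvv4l2decoder mjpeg=1 ! nvvidconv ! "
        "video/x-raw,format=BGRx ! videoconvert ! "
        "video/x-raw,format=BGR ! queue ! appsink drop=1";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cerr << "ERROR! Unable to open camera\n";
        return -1;
    }

    cv::Mat frame;
    for (;;) {
        if (!cap.read(frame) || frame.empty())
            break;                 // read error or end of stream
        cv::imshow("Live", frame); // needs a local display (e.g. export DISPLAY=:0 over ssh)
        if (cv::waitKey(5) >= 0)
            break;
    }
    return 0;
}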