I am trying to connect two ELP USB 3.0 2MP (Sony IMX291) cameras to the Jetson Nano. One works fine, but trying to read an image from the second one causes a VIDIOC_STREAMON: No space left on device error. I’ve read on this forum that this message cryptically indicates that the USB 3.0 bandwidth is used up, but both cameras streaming at, say, 1080p and 60 fps should be well under the capacity of the USB 3.0 channel on the Nano. If this means that each camera hogs all the bandwidth despite using only a small amount, is there a fix? Thank you.
PS. I tried to draw attention to this issue in a similar but older post, but perhaps due to its age it got no response, so I’m making a new topic.
Not sure, as I don’t know this camera, but if it supports MJPG you may check this:
For other formats I’m even less sure, but you may try other uvc driver quirks for bandwidth.
Thanks for your response! Running v4l2-ctl shows that the camera supports YUYV and MJPG. I also tried a solution found on a different forum, which recommended using the lines
sudo rmmod uvcvideo
sudo modprobe uvcvideo quirks=128
but got another error; this time it’s:
VIDIOC_STREAMON: Input/output error
It occurs when I try to read an image via OpenCV. I will try the driver modification; I was just hoping to avoid that, since I don’t feel too confident I won’t break something else. Is it possible another quirk might work for MJPG? I will do some research myself, but this is my first time delving into driver internals.
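For reference, a minimal version of what I’m running looks roughly like this (the device index is just illustrative):
import cv2

cap = cv2.VideoCapture(0)  # illustrative device index
# try to request MJPG instead of the default YUYV
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
ret, frame = cap.read()  # this read is where VIDIOC_STREAMON fails
if not ret:
    print('failed to grab a frame')
cap.release()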
Hi,
If your USB camera supports other modes such as 1280x720p60, you may give it a try. Different USB cameras may require different amounts of bandwidth. Trying other modes can show whether the camera driver requests only the dynamically required bandwidth.
This is the limit of the Nano devkit:
If your USB camera really does need that amount of bandwidth, you probably cannot have two working together.
Thanks for your answer. I tried setting the resolution to 640x480p60 and got the same error. There is absolutely no way that stream actually takes half the bandwidth, because if it did, even one camera would not run at 1080p, but it does. It seems to me the uvc driver is giving the camera more bandwidth than it actually needs. Do I have options besides changing the driver?
I haven’t tried it, and I’m not sure about 128. You may try 640 (see this). This wouldn’t work for MJPG, so you would try with YUYV only.
[EDIT: I’ve since seen the link is old (2014) and may no longer be accurate. However, following a link from there, you may try to modify and rebuild the uvcvideo module and use quirks=128 with this.]
I did try 640 as well as 128, to no avail. Both result in the Input/output error listed earlier, and I have to set the quirk back to 512 to get the original error again. In addition, it seems that I can’t change the default camera encoding through OpenCV: cap.get(cv2.CAP_PROP_FOURCC) always returns 0.0 no matter what I try for the codec value.
Also, @DaneLLL, it seems that changing the camera’s resolution through OpenCV actually fails. I set it to 640x480, and the window size changes, which led me to believe the resolution does as well, but looking at it closely it still looks like 1080p, and cap.get(cv2.CAP_PROP_FRAME_HEIGHT) returns 1080.
Perhaps this camera does not support changing parameters through OpenCV?
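Roughly what I’m doing to check, as a sketch (again, illustrative device index):
import cv2

cap = cv2.VideoCapture(0)  # illustrative device index
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
# read back what the driver actually accepted
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH))   # still 1920.0 here
print(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))  # still 1080.0 here
print(cap.get(cv2.CAP_PROP_FOURCC))        # 0.0 no matter the codec
cap.release()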
You may also try configuring/building/installing OpenCV without libv4l and see if it changes:
-D WITH_LIBV4L=OFF
You would still be able to use V4L devices, but through a less restrictive interface, with fewer errors/warnings.
Would that somehow allow OpenCV to change the camera’s parameters when it normally cannot?
I don’t think that’s all there is to it, but one issue may be that OpenCV seems to always set the camera to YUYV at the default resolution. I can change the settings via v4l2-ctl, but when I run my program that opens a video capture with OpenCV, everything reverts to the defaults. I don’t understand why that causes the quirks=640 uvcvideo setting to fail, since it should work with YUYV. In any case, to rule out the possibility that the camera’s uncompressed 1080p60 video really is eating half the entire bandwidth, I need to find a way to switch to MJPG or lower the resolution/frame rate and test that way. That is an issue because all three cameras I have tested on the Nano so far ignore OpenCV’s property-setting commands.
Hi,
You may work out a pipeline to launch the camera in gstreamer:
And apply it to OpenCV like:
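A minimal sketch (the device node and caps are examples; adjust them to your camera):
import cv2

# hand the camera to GStreamer and deliver BGR frames to OpenCV via appsink
pipeline = ('v4l2src device=/dev/video4 ! '
            'video/x-raw,format=YUY2,width=640,height=480,framerate=60/1 ! '
            'videoconvert ! video/x-raw,format=BGR ! appsink')
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ret, frame = cap.read()
print(ret, frame.shape if ret else None)
cap.release()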
Hi, and thanks again. I tried launching a preview from the command line via gst-launch, but even though I specified a resolution of 640 by 480, I again got a 1080p image. So it seems my camera is ignoring gstreamer parameters as well, or at least not interpreting them properly. Is this by any chance fixable, or are these cameras unworkable?
The exact line I used was:
gst-launch-1.0 v4l2src device=/dev/video4 ! 'video/x-raw,format=YUY2,width=640,height=480,framerate=60/1' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvoverlaysink
after verifying that the camera supports this format, resolution and framerate.
Hi,
If you can get 640x480 frames when running the v4l2-ctl command, you should see the same behavior when running gstreamer. We have verified this with a Logitech c920 and an E-Con See3CAM CU135.
You may run the following command:
gst-launch-1.0 v4l2src device=/dev/video4 num-buffers=10 ! 'video/x-raw,format=YUY2,width=640,height=480,framerate=60/1' ! jpegenc ! multifilesink location=cap_%03d.jpg
Then check the JPEG files. If you do see the JPEGs in 1920x1080, we suggest you contact the camera vendor.
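For example, you can check the captured resolution with a few lines of Python (assuming the files land in the current directory):
import cv2

img = cv2.imread('cap_000.jpg')
print(img.shape)  # (height, width, channels); expect (480, 640, 3)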
You’re right, the images are indeed 480p. However, when running with nvoverlaysink I still get what appears to be a 1080p-tall overlay (the height takes up my entire 1080p monitor, and the resolution does not look like 480p).
Also, I cannot open a second test stream with the other camera: it fails with a Failed to allocate required memory error, which I am guessing means I’m out of bandwidth again.
Finally, I tried two different (USB 2.0) cameras and this time when I try to open two test streams I end up with the following error:
NvxBaseWorkerFunction[2575] comp OMX.Nvidia.std.iv_renderer.overlay.yuv420 Error -2147479552
Googling this error gives no information.
Hi,
Please check the gstreamer user guide for more details about the NVIDIA plugins:
https://developer.nvidia.com/embedded/dlc/l4t-accelerated-gstreamer-guide-32-2
The nvoverlaysink plugin scales frames to fit the display resolution. For your case, you can use nveglglessink.
The problem was probably my eyes then. I just can’t tell 1080p from 480p when they’re the same window size apparently… Thank you for assisting me through this.
To summarize what I have so far: via the command line using gstreamer, the USB 3.0 cameras cannot open two streams simultaneously, even at the lowest resolution. One camera can, however, open a stream at the highest resolution, which should take about eight times the bandwidth of the lowest. It’s clear from this that there is actually enough bandwidth for two low-resolution streams; a single camera just reserves over half of it despite not using it all. Further, the uvcvideo quirks=128 workaround does not work. Do I have any options except modifying the driver?
Hi,
Suggest you refer to the patch and add prints in uvc_video.c to check whether the USB camera requests different bandwidth for different resolutions.
For approaches that avoid modifying the driver, other users may need to share their guidance.
Thank you, I will try that.