Is it possible to get 1280*720@>100fps with a USB cam and OpenCV?

My current pipeline gets 640*360@~115fps. The fps is enough but the resolution leaves a bit to be desired.

The system is currently deployed so I don't have physical access to it, but the video capture pipeline is essentially just a cv2.VideoCapture object, a cv2.cvtColor to grayscale, and then writing the frames out. The Nano has a number of other processes running on it which use some compute power.
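Roughly, the loop looks like this (a minimal sketch from memory; the device path, output file, and codec below are placeholders, not the exact deployed code):

import cv2

cap = cv2.VideoCapture('/dev/video0')         # placeholder device path
fourcc = cv2.VideoWriter_fourcc(*'MJPG')      # placeholder codec
out = cv2.VideoWriter('out.avi', fourcc, 115.0, (640, 360), isColor=False)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # mono conversion on CPU
    out.write(gray)                                 # write 1-channel frames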

If I switch the camera to 1280*720, what kind of fps should I expect to get?

You may want to give further details about how you are accessing frames from OpenCV.

However, if you only need monochrome, you could try a GStreamer pipeline using nvvidconv to produce GRAY8, which should be accepted by OpenCV videoio. Convert your camera stream into I420 in NVMM memory with a first nvvidconv, then use a second one to convert it into GRAY8 in standard (video/x-raw) memory.
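In OpenCV terms, and assuming your camera can output a raw format that nvvidconv accepts (if it only provides MJPG you would need a decoder first, as discussed below), the shape of such a pipeline would be something like:

import cv2

# Sketch only: source caps depend on what your camera actually provides
gst_str = ('v4l2src device=/dev/video0 ! '
           'nvvidconv ! video/x-raw(memory:NVMM), format=I420 ! '  # HW conversion into NVMM
           'nvvidconv ! video/x-raw, format=GRAY8 ! '              # back to CPU memory as mono
           'appsink')
cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)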

Thanks for the quick reply. I am just using cap = cv2.VideoCapture('/dev/video0') and then cap.read() in a while loop.

Have you gotten this alternative pipeline to record video at 1280*720@>100fps? And if so, do you have any code or suggestions for me? I haven't used those tools before, aside from GStreamer briefly, so I don't even know where to start.

You would have to tell us what formats/resolutions/framerates your camera provides for better advice.

# Need to do this only once
sudo apt install v4l-utils

# This shows what the video0 camera can provide
v4l2-ctl -d/dev/video0 --list-formats-ext

You are currently using the V4L API for videoio. Using the GStreamer API would let you leverage HW scaling/format conversion.
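For illustration, the difference on the OpenCV side is just the backend flag and, for GStreamer, a pipeline string ending in appsink (the pipeline below is a generic placeholder until your camera's formats are known):

import cv2

# Current approach: V4L2 backend, conversions happen on the CPU
cap_v4l = cv2.VideoCapture('/dev/video0', cv2.CAP_V4L2)

# GStreamer backend: elements such as nvvidconv upstream of appsink
# can do scaling/format conversion in HW
pipeline = ('v4l2src device=/dev/video0 ! videoconvert ! '
            'video/x-raw, format=BGR ! appsink')
cap_gst = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)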

Yes, I am using V4L. Is there a way I can do the grayscale conversion/scaling in HW using GStreamer and feed the output right into cv2.VideoCapture? Or would I have to use GStreamer to record the video and write the files?

I'll run that command when I get my hands on the Nano after its current data collection cycle is complete.

I really appreciate the help!

I might answer once you provide the output of the last command I suggested.
Basically, you would use a GStreamer pipeline converting your camera stream into I420 in NVMM memory with nvvidconv, then another nvvidconv to convert into GRAY8 in standard memory (video/x-raw), and you would then be able to read 1-channel frames in OpenCV with the GStreamer API. Share the output of list-formats-ext for better advice.

OK, thanks so much! I don't have access to the Nano at the moment because it's being used for data collection. I could plug the camera into another computer and run the command there, if that would give the same information?

Not sure… it would show what the other host's driver can provide from this camera, but what your Jetson driver provides could be different in some cases. It would, however, give an idea of what this cam can provide. Not guaranteed, but a first starting point…

I was able to run the command, and it does appear that the camera supports 1280*720; however, when I tried changing the resolution using v4l2-ctl I just got invalid-argument errors. I also checked my installation of OpenCV to make sure it was built with GStreamer support.

For GStreamer, do I basically just have to pass a GST pipeline into my cv2.VideoCapture object, i.e. cap = cv2.VideoCapture([GST pipeline])? Or are there more things I have to change first? Thanks so much for the help!

The output of that cmd says "ioctl: VIDIOC_ENUM_FMT" and "Pixel Format: 'MJPG' (compressed)". I'm not quite sure which line is the important one.

Please share the full output of:

v4l2-ctl -d/dev/video0 --list-formats-ext

so that one can know what it provides for better advice.

Finally got a pic of this. Thanks!

So it seems your camera only provides MJPG format, and has a 720p mode @120 fps, which I assume is what you want.
First, try to display your camera stream with:

gst-launch-1.0 v4l2src device=/dev/video0 io-mode=2 ! image/jpeg, width=1280, height=720, framerate=120/1, format=MJPG ! nvv4l2decoder mjpeg=1 ! nvvidconv ! videoconvert ! xvimagesink

If this doesn’t work, try adding jpegparse:

gst-launch-1.0 -v v4l2src device=/dev/video0 io-mode=2 ! image/jpeg, width=1280, height=720, framerate=120/1, format=MJPG ! jpegparse ! nvv4l2decoder mjpeg=1 ! nvvidconv ! videoconvert ! xvimagesink

If this doesn’t work, please post the output of previous command.

If you can see the video, you would measure the framerate that this pipeline can achieve:

gst-launch-1.0 -v v4l2src device=/dev/video0 io-mode=2 ! image/jpeg, width=1280, height=720, framerate=120/1, format=MJPG ! nvv4l2decoder mjpeg=1 ! fpsdisplaysink video-sink=fakesink text-overlay=false

If ok, add nvvidconv to BGRx:

gst-launch-1.0 -v v4l2src device=/dev/video0 io-mode=2 ! image/jpeg, width=1280, height=720, framerate=120/1, format=MJPG ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw, format=BGRx ! fpsdisplaysink video-sink=fakesink text-overlay=false

If ok, add videoconvert to BGR:

gst-launch-1.0 -v v4l2src device=/dev/video0 io-mode=2 ! image/jpeg, width=1280, height=720, framerate=120/1, format=MJPG ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! fpsdisplaysink video-sink=fakesink text-overlay=false

If ok, you would try such a GStreamer pipeline in OpenCV to get color frames:

const char* gst_str = "v4l2src device=/dev/video0 io-mode=2 ! image/jpeg, width=1280, height=720, framerate=120/1, format=MJPG ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink";
cv::VideoCapture cap(gst_str, cv::CAP_GSTREAMER);
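Since your script is in Python, the equivalent there would be (same pipeline string; untested on your exact setup):

import cv2

gst_str = ('v4l2src device=/dev/video0 io-mode=2 ! '
           'image/jpeg, width=1280, height=720, framerate=120/1, format=MJPG ! '
           'nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw, format=BGRx ! '
           'videoconvert ! video/x-raw, format=BGR ! appsink')
cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)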

Be aware that OpenCV videoio is not so fast on Jetson, so you may have to consider other options.
If you intend to process monochrome, you would instead try such a pipeline:

gst-launch-1.0 -v v4l2src device=/dev/video0 io-mode=2 ! image/jpeg, width=1280, height=720, framerate=120/1, format=MJPG ! nvv4l2decoder mjpeg=1 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420' ! nvvidconv ! video/x-raw, format=GRAY8 ! videoconvert ! xvimagesink

gst-launch-1.0 -v v4l2src device=/dev/video0 io-mode=2 ! image/jpeg, width=1280, height=720, framerate=120/1, format=MJPG ! nvv4l2decoder mjpeg=1 ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420' ! nvvidconv ! video/x-raw, format=GRAY8 ! fpsdisplaysink video-sink=fakesink text-overlay=false

and if this works, you would open the camera for reading one-channel frames from OpenCV with:

const char* gst_str = "v4l2src device=/dev/video0 io-mode=2 ! image/jpeg, width=1280, height=720, framerate=120/1, format=MJPG ! nvv4l2decoder mjpeg=1 ! nvvidconv ! video/x-raw(memory:NVMM), format=I420 ! nvvidconv ! video/x-raw, format=GRAY8 ! appsink";
cv::VideoCapture cap(gst_str, cv::CAP_GSTREAMER);
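Or in Python (again a sketch; frames should come back as single-channel arrays, so no cvtColor is needed):

import cv2

gst_str = ('v4l2src device=/dev/video0 io-mode=2 ! '
           'image/jpeg, width=1280, height=720, framerate=120/1, format=MJPG ! '
           'nvv4l2decoder mjpeg=1 ! nvvidconv ! '
           'video/x-raw(memory:NVMM), format=I420 ! nvvidconv ! '
           'video/x-raw, format=GRAY8 ! appsink')
cap = cv2.VideoCapture(gst_str, cv2.CAP_GSTREAMER)
ret, frame = cap.read()
if ret:
    print(frame.shape)  # expect (720, 1280): one channel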

Let us know what works or not.

Strangely enough, the first time I tried the first command in your last response I got a live video feed on my display, but after that it didn't work again. I tried all the other commands and something is definitely still wrong. Here is the output from the second command, with jpegparse:


The commands seem to be hanging on something rather than throwing errors. Thanks!

First, note that I added the io-mode=2 option to v4l2src because in many cases this improves reading from MJPG cams, but it might not be suitable in your case.
Same for jpegparse: if it worked without it, don't use it. It may be slow @120 fps.

I do see errors:

nvbuf_utils: Invalid memsize=0
...

but I'm unable to advise much further. You may try to specify caps for the nvv4l2decoder output, or remove videoconvert, which may not be mandatory and may be slow at 120 fps:

gst-launch-1.0 -v v4l2src device=/dev/video0 io-mode=2 ! image/jpeg, width=1280, height=720, framerate=120/1, format=MJPG ! nvv4l2decoder mjpeg=1 ! 'video/x-raw(memory:NVMM), format=NV12, framerate=120/1' ! nvvidconv ! xvimagesink