I’ve built OpenCV on my desktop with libv4l enabled, and it can get 1080p frames at 30fps. However, with libv4l-enabled OpenCV on the TX2, I can only get 30fps at 480p, 10fps at 720p, and 5fps at 1080p.
I know it is a long shot, but it is possible that you are capturing in an uncompressed format on the TX2. You can check your camera’s supported formats on the TX2 and on the desktop to see if they are the same.
First, get your camera device id:
$ lsusb
Bus 002 Device 003: ID 0bda:0411 Realtek Semiconductor Corp.
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 007: ID 0bda:0129 Realtek Semiconductor Corp. RTS5129 Card Reader Controller
Bus 001 Device 006: ID 8087:0a2a Intel Corp.
Bus 001 Device 004: ID 1bcf:2b8a Sunplus Innovation Technology Inc.
Bus 001 Device 009: ID 08a0:0850
Bus 001 Device 005: ID 045e:07a5 Microsoft Corp. Wireless Receiver 1461C
Bus 001 Device 003: ID 0bda:5411 Realtek Semiconductor Corp.
Bus 001 Device 002: ID 046d:0a4d Logitech, Inc. G430 Surround Sound Gaming Headset
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
In my case, it is the 08a0:0850. Then run lsusb in verbose mode with your device id:
sudo lsusb -v -d <device-id>
The output is a little difficult to follow, but basically it lists each format (FORMAT_UNCOMPRESSED, FORMAT_MJPEG, …) and then the available resolutions for that format:
This FRAME_UNCOMPRESSED block is for 1280x720 on my camera. Getting the framerate is a bit tricky: each dwFrameInterval is an available frame interval in units of 100 nanoseconds, so 10000000/dwFrameInterval gives you the framerate in fps. On my camera, for 720p raw video, the maximum framerate is 7.5 fps (dwFrameInterval(0)).
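The arithmetic can be sanity-checked in a couple of lines (1333333 is the interval that matches the 7.5 fps case above; 333333 is an illustrative 30 fps interval):

```python
# dwFrameInterval is a frame interval in 100 ns units (UVC descriptor convention),
# so fps = 10000000 / dwFrameInterval.
def interval_to_fps(dw_frame_interval):
    return 10_000_000 / dw_frame_interval

print(round(interval_to_fps(1333333), 1))  # 7.5 fps (720p raw max on this camera)
print(round(interval_to_fps(333333), 1))   # 30.0 fps
```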
It seems the camera can support 1080p at 30fps in FRAME_MJPEG but not in uncompressed form.
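If v4l-utils is available, v4l2-ctl prints the same format/resolution/interval table in a more readable form than lsusb -v (this assumes the camera shows up as /dev/video0; adjust the device node to match yours):

```shell
# install the V4L2 command-line tools
sudo apt-get install v4l-utils
# list every pixel format with its resolutions and frame intervals
v4l2-ctl --device=/dev/video0 --list-formats-ext
```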
It is really confusing since I can get 1080P at 30fps on my desktop using the exact same USB camera.
When I used this camera on my desktop for the first time, I used pip-installed OpenCV and it could only produce 1080p frames at 5fps. After installing libv4l and building OpenCV from source (with libv4l enabled), it could produce 1080p at 30fps. But on the TX2, even though I built OpenCV from source with libv4l enabled, I still cannot get 30fps.
I tried to use MMAPI and failed to record a video using sample 10; it always returns this error message:
Set governor to performance before enabling profiler
PRODUCER: Creating output stream
PRODUCER: Launching consumer thread
Failed to query video capabilities: Inappropriate ioctl for device
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
875967048
842091865
create video encoder return true
H264: Profile = 100, Level = 50
PRODUCER: Starting repeat capture requests.
Segmentation fault (core dumped)
From the lsusb output, it seems OpenCV is always reading the camera in uncompressed form, even though I tried some methods found online to set it to MJPEG, which can support 30fps.
Sample 12 can produce 30fps 1080p frames when MJPEG is selected. The reason I want to use OpenCV is that it is easy to use, and my GPU utilization is almost full. I’m not sure whether using CUDA for grabbing frames would affect my deep learning models.
So I’d prefer to get 30fps 1080p frames from OpenCV’s VideoCapture in MJPEG format. Any possible solutions?
This is a completely CPU-based implementation and may not work better than using MMAPI. Anyway, one more thing you can try is running ‘sudo jetson_clocks’. It sets the CPU to run at maximum performance.
I already tried jetson_clocks and it does not improve the fps. The problem now seems to be how to read MJPEG frames from the USB camera instead of uncompressed frames.
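For reference, a minimal sketch of requesting MJPEG through VideoCapture, assuming a Python OpenCV 3.4+ build with the V4L2 backend (the device index 0 and the helper names here are assumptions for illustration; `cv2.VideoWriter_fourcc(*"MJPG")` does the same packing as the local `fourcc` helper):

```python
def fourcc(code):
    # Pack a 4-character codec code into an int, the same way
    # cv2.VideoWriter_fourcc(*"MJPG") does.
    return sum(ord(c) << (8 * i) for i, c in enumerate(code))

def open_mjpeg_capture(index=0):
    import cv2  # imported here so fourcc() above works even without OpenCV installed
    cap = cv2.VideoCapture(index, cv2.CAP_V4L2)  # explicitly select the V4L2 backend
    # Some drivers only honor the FOURCC if it is set before the resolution.
    cap.set(cv2.CAP_PROP_FOURCC, fourcc("MJPG"))
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
    cap.set(cv2.CAP_PROP_FPS, 30)
    return cap
```

If `cap.get(cv2.CAP_PROP_FOURCC)` still reports YUYV after opening, the backend is silently refusing the format request, which would at least confirm where the problem sits.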