My program consists of 4 parts.
- USB camera: puts RGBA images into queueA.
- convert to YUV420: gets data from queueA, converts it, and sends it to NvVideoEncoder. About 7 ms per picture.
- NvVideoEncoder: encodes to H.264 and sends to queueB.
- save: gets data from queueB and saves it to a file.
The problem is in part 2.
I am using 4 cameras, each running at 20 fps, so there are 4 threads and 4 queueA instances. I find that queueA grows larger and larger, which means part 2 cannot handle 80 pictures per second.
Is this too slow? My picture size is 2448 × 2048.
There are a few suggestions:
1 Execute ‘sudo nvpmodel -m 0’ and ‘sudo jetson_clocks’
2 In 01_video_encode, there are these options:
--max-perf Enable maximum Performance
-hpt <type> HW preset type (1 = ultrafast, 2 = fast, 3 = medium, 4 = slow)
Please enable --max-perf and set the hardware preset type to ultrafast.
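Putting the two flags together, a sample invocation of the 01_video_encode sample might look like the following (the sample path and file names are placeholders, and the exact path depends on your release):

```shell
# Run the 01_video_encode sample with maximum performance and the
# ultrafast HW preset. Resolution matches the 2448x2048 frames above;
# input.yuv and output.h264 are placeholder file names.
cd /usr/src/tegra_multimedia_api/samples/01_video_encode
./video_encode input.yuv 2448 2048 H264 output.h264 --max-perf -hpt 1
```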
3 Feeding NvBufferColorFormat_NV12 to NvVideoEncoder gives a minor improvement. If you currently allocate buffers in NvBufferColorFormat_YUV420, please try NvBufferColorFormat_NV12 instead.
If the performance is still not good, you may use tegrastats to check the overall system load and find the bottleneck.
I set HWPresetType = 1, and now the encoder is very fast.
But my Argus SDK doesn't have NvVideoEncoder::setMaxPerfMode. Maybe my SDK is too old.
My hardware is Xavier. I installed the L4T Multimedia API using JetPack 4.1.1; the Multimedia API version is 31.1.
Which SDK can I upgrade to? Is the upgrade necessary, and how much faster will it be?
Looks like you are using r31.1. Please run ‘head -1 /etc/nv_tegra_release’ to confirm the release version.
r31.1 is a developer preview release and not for production (it has not passed all SQA tests). We strongly recommend upgrading to an r32 release; r32.3.1 is fresh out of the oven. You may upgrade through sdkmanager.