Hello,
I’m using an IMX390 camera and found that H.264 encoding puts a particularly high load on the CPU. When encoding 8 camera streams, the CPU load is as high as 150%.
How can I reduce the CPU load? Should I optimize my GStreamer code, or use a different encoding API?
You can use nvv4l2camerasrc to capture the frame data directly into an NVMM buffer. By default the plugin supports UYVY. Please try this patch and rebuild the plugin to support YUY2: Macrosilicon USB - #5 by DaneLLL
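As a sketch, a pipeline of this shape keeps frames in NVMM memory from capture through encode, avoiding CPU-side copies. The device node, resolution, framerate, and bitrate below are placeholders; adjust them for your sensor and L4T release:

```shell
# Capture into NVMM with nvv4l2camerasrc, convert to NV12, and
# encode with the hardware H.264 encoder. All caps values are
# example assumptions, not taken from the original thread.
gst-launch-1.0 nvv4l2camerasrc device=/dev/video0 ! \
  'video/x-raw(memory:NVMM),format=UYVY,width=1920,height=1080,framerate=30/1' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! \
  nvv4l2h264enc bitrate=8000000 ! h264parse ! \
  matroskamux ! filesink location=cam0.mkv
```

Because the buffers stay in NVMM end to end, the CPU mostly just shepherds buffer handles rather than copying pixel data.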
Hello,
Thanks for your reply. Because I need to record the exposure time from the API, nvv4l2camerasrc does not work for me. Is there any other way? And what is the expected CPU load?
I have also tested recording 8 IMX490 cameras, which needs 250% CPU load. That is too high to use.
Hi,
The optimization is to eliminate the memory copy. Could you check if you can move your custom code into the nvv4l2camerasrc plugin? It is open source, so you can customize it to include your code.
It is still too high for me. Maybe I can use a lower bitrate, but is there a better way? What about l4t-multimedia, and what improvement should I expect from it?
I would also like to use the v4l2cuda sample in jetson_multimedia_api with userptr mode and zero-copy. This sample seems more suitable for my project, but I need to add H.264 encoding code to it. Which sample should I refer to?
Hi,
For video encoding, the data has to be in NvBuffer, so it is better to use nvv4l2camerasrc or refer to 12_camera_v4l2_cuda. Your pipeline is optimal. CPU usage should be much lower compared with a software encoder such as x264enc.
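For reference, the 12_camera_v4l2_cuda sample can be built and run roughly as below. The install path and the exact option values are assumptions for a typical L4T setup; check the sample's README and `--help` output on your board:

```shell
# Build and run the 12_camera_v4l2_cuda sample (paths/flags assumed;
# verify against the sample's README on your L4T release).
cd /usr/src/jetson_multimedia_api/samples/12_camera_v4l2_cuda
make
# -d: V4L2 device node, -s: capture size, -f: pixel format
./camera_v4l2_cuda -d /dev/video0 -s 1920x1080 -f UYVY
```

The sample captures V4L2 frames into NvBuffer, which is the same buffer type NvVideoEncoder consumes, so it is a natural starting point for adding an encode stage.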
Hi,
According to Nvv4l2h264enc latency and preset-level - #17 by DaneLLL, I ran 12_camera_v4l2_cuda + NvVideoEncoder to encode the IMX390 and IMX490 camera images.
The IMX390 camera's encoded video plays well, but the IMX490 camera's does not.
This is the IMX490 encoded video:
Hi,
I have read the nvv4l2camerasrc plugin code; it is hard to move my code into nvv4l2camerasrc. Is there any way to push a V4L2 buffer or memory into encoding with a pipeline like: