How to improve Jetson NX video encode performance?

I want to encode 1080p/120fps video in real time, but I only get 1080p/80fps.
My method uses GStreamer with OpenCV. The main encode code is:
char *gst_write = "appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw,format=RGBA ! nvvidconv ! omxvp9enc ! matroskamux ! filesink location=file.mkv";
So my questions are:
1. Is 1080p/80fps the maximum encoding performance of the NX, due to hardware limits?

2. I think hardware encoding only uses the hardware encode engine. Is that right? Could I use the GPU CUDA cores to improve encoding performance?

It should be hitting the constraint of CPU capability. In the GStreamer string, we need to convert BGR data to RGBA and copy it to an NVMM buffer. These operations take significant CPU usage. If the 1920x1080 frame data is already in an NVMM buffer, the data copy is not required and you can achieve 1920x1080p120. You may run the following two commands for comparison:

$ gst-launch-1.0 videotestsrc is-live=1 ! video/x-raw,width=640,height=360,framerate=120/1 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080' ! nvv4l2h264enc ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 -v
$ gst-launch-1.0 videotestsrc is-live=1 ! video/x-raw,width=1920,height=1080,framerate=120/1 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080' ! nvv4l2h264enc ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 -v

The first command copies 640x360 data to the NVMM buffer and does the conversion through the hardware engine to get a 1920x1080 NVMM buffer. This can achieve 120fps.
The second command copies 1920x1080 data to the NVMM buffer. Performance is capped by copying buffers through the CPU.

Please execute:

$ sudo nvpmodel -m 2
$ sudo jetson_clocks

If this does not bring the required performance, it is a constraint of the hardware.

Thank you for your reply. I followed your suggestion, and the first command could achieve 120fps.
I have another question. When we use GStreamer, is the hardware encode engine doing the video compression, rather than the CUDA computing cores? Could I use the CUDA cores to do video encoding?

It is the hardware encoding engine doing the video compression. We would suggest using the hardware engine so that you can use the GPU for other tasks, such as deep learning inference.

If you would like to do video encoding on the GPU cores, you would need to implement the function yourself through CUDA programming.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.