Performance regression from omxh26Xenc to nvv4l2h26Xenc


I’m wondering if anyone else is seeing a performance regression going from the gst-omx plugins to the gst-nvv4l2 plugins. I have a pipeline where I care about low latency, and switching the encoder from omx to nvv4l2 gives me almost 100% more latency, all other things remaining equal. Is there an option set by default on one plugin but not on the other that could cause this kind of difference? Measured glass-to-glass latencies (three runs each):


nvv4l2h265enc: 260, 225, 220 ms
nvv4l2h265enc maxperf-enable=true: 220, 180, 200 ms
nvv4l2h265enc bitrate=10000000 control-rate=0 maxperf-enable=true: 230, 210, 250 ms

omxh265enc: 200, 150, 150 ms
omxh265enc bitrate=10000000 control-rate=1: 110, 130, 130 ms

nvv4l2h264enc: 160, 180, 200 ms
nvv4l2h264enc maxperf-enable=true: 120, 160, 180 ms

omxh264enc: 100, 80, 120 ms
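To check for default-property differences between the two encoders, the standard GStreamer inspection tool can be diffed (a generic diagnostic sketch; this has to be run on the Jetson itself, where both plugin sets are installed):

```shell
# Dump the full property listing (including default values) of each
# encoder element, then diff them to spot options that differ by default.
gst-inspect-1.0 nvv4l2h265enc > nvv4l2h265enc.txt
gst-inspect-1.0 omxh265enc > omxh265enc.txt
diff nvv4l2h265enc.txt omxh265enc.txt
```

The property names differ between the two element families, so the diff is noisy, but it makes it easy to scan for rate-control, preset, and buffering defaults side by side.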

The pipeline is:

nvvidconv ! video/x-raw(memory:NVMM),format=(string)I420 ! nvtee name=t1 ! nvvidconv !
video/x-raw(memory:NVMM) ! nvtee name=t2 ! nvv4l2h265enc bitrate=10000000 control-rate=0 maxperf-enable=true !
video/x-h265, stream-format=(string)byte-stream ! h265parse ! rtph265pay !
queue ! udpsink host= port=5000 sync=false async=false
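As a more objective alternative to stopwatch measurement, GStreamer’s built-in latency tracer can report pipeline latency per buffer. A sketch, using videotestsrc as a stand-in source since I can’t reproduce the camera setup (the GST_TRACERS and GST_DEBUG environment variables are standard GStreamer features):

```shell
# Enable the latency tracer and raise the tracer log level so the
# per-buffer latency records are printed to the log.
GST_TRACERS=latency GST_DEBUG="GST_TRACER:7" \
gst-launch-1.0 videotestsrc num-buffers=300 ! \
  nvvidconv ! 'video/x-raw(memory:NVMM),format=I420' ! \
  nvv4l2h265enc bitrate=10000000 maxperf-enable=true ! \
  h265parse ! fakesink sync=false 2> latency.log
```

This only measures in-pipeline latency (not camera exposure or display scan-out), but it isolates the encoder’s contribution when comparing omx against nvv4l2.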

The only change between tests is the encoder element and its settings (plus the matching h264/h265 parse and payload elements). I’m aware that the pipeline is not optimal, but since it is identical across tests, I don’t think that matters for the comparison.

I’m measuring latency by pointing the camera at a screen and filming a stopwatch next to the output display. Since the display runs at 60 Hz and the camera at 30 Hz, I understand there can be up to ~50 ms of measurement error, which matches the variation I see between runs.
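The ~50 ms error bound follows from the frame periods of the camera and the display; a quick worked check of that arithmetic:

```python
# Worst-case quantization error of the stopwatch method: the camera
# samples the stopwatch at 30 Hz and the display refreshes at 60 Hz,
# so each end of the measurement can be off by up to one frame period.
camera_period_ms = 1000 / 30   # ~33.3 ms per camera frame
display_period_ms = 1000 / 60  # ~16.7 ms per display refresh

worst_case_error_ms = camera_period_ms + display_period_ms
print(round(worst_case_error_ms))  # → 50
```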

We have compared the plugins on TX2 and don’t see a significant difference.

It should be the same on Xavier. Please run your profiling after executing ‘sudo nvpmodel -m 0’ and ‘sudo jetson_clocks’.

I am seeing a similar issue; I’ve confirmed that nvpmodel is set to 0 and that jetson_clocks has been run.

@ceptor01 Do you see this issue with the latest JetPack version?

We have not upgraded recently, but we would still like a solution to this issue. We are currently still using the omx plugins.

Currently running on JetPack 4.3:

cat /etc/nv_tegra_release 
# R32 (release), REVISION: 3.1, GCID: 18186506, BOARD: t186ref, EABI: aarch64, DATE: Tue Dec 10 07:03:07 UTC 2019

Our main reason for wanting to switch is to be able to use the encoder in Docker. As reported in “Degraded H.264 encoding quality with docker and OpenMAX”, the omx* encoders do not work well when used in a container.