NvMedia Image Encode performance

Hi,

We’ve encountered an unusual performance issue within our video encoding pipeline. The pipeline’s efficiency degrades over time, followed by a sudden processing spike, after which performance temporarily returns to baseline. This cyclical behavior causes undesirable fluctuations in our workflow.
Our initial troubleshooting points to the NvMediaIEPFeedFrame function. While this might not be the only place where it happens, it’s the first area we’ve identified that exhibits this behavior. We observed a gradual increase in frame processing time with each successive call.
To isolate the issue, we modified the reference nvmimg_enc app to measure the execution time of NvMediaIEPFeedFrame. The chart illustrates the time taken by each NvMediaIEPFeedFrame call. We also made a minor change to read from /dev/zero instead of a real video file, just to avoid sharing large YUVs; the behavior is identical regardless of the input file.
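For reference, the /dev/zero change is only a few lines. Below is a minimal sketch under the assumption that the input frame is first read into a plain CPU buffer before being copied into the NvMediaImage surface; FillFrameFromDevZero and its arguments are placeholder names rather than the sample’s actual variables.

/* Minimal sketch (placeholder names): fill the input buffer from /dev/zero
 * instead of reading a real YUV file. The encoder still does full work on
 * the all-zero frames, so the NvMediaIEPFeedFrame timing pattern is the
 * same as with real video. */
#include <stdio.h>

static int FillFrameFromDevZero(unsigned char *frameBuf, size_t frameSize)
{
    FILE *fp = fopen("/dev/zero", "rb");
    if (fp == NULL) {
        perror("fopen /dev/zero");
        return -1;
    }
    size_t bytesRead = fread(frameBuf, 1, frameSize, fp);
    fclose(fp);
    return (bytesRead == frameSize) ? 0 : -1;
}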


In our production environment, these processing spikes are more pronounced and frequent, likely due to running multiple encoding processes simultaneously.
I’ve attached the modified application and configuration files, so you can reproduce the issue.
source.zip (19.1 KB)

Could you please investigate the root cause of this performance degradation and the subsequent processing spikes?
Are there ways to mitigate this behavior and ensure more consistent pipeline performance?

Software Version
DRIVE OS Linux 5.2.6
DRIVE OS Linux 5.2.6 and DriveWorks 4.0
DRIVE OS Linux 5.2.0
DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
other

Target Operating System
Linux
QNX
other

Hardware Platform
NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)
other

SDK Manager Version
1.9.1.10844
other

Host Machine Version
native Ubuntu 18.04
other

Could you clarify whether the execution time of the NvMediaIEPFeedFrame function gradually increases over time, beyond 3 ms or even higher?

With the sample application, I observe that the processing time increases over time, with spikes of up to 4 ms.

Our production pipeline is much more complicated (4 parallel encodings + NvMedia Image LDC + some custom image processing). There we observe that the processing time for the whole pipeline starts at around 10 ms, then climbs to about 20 ms, and the spikes reach 35 ms.

It’s not feasible to provide the code for our whole pipeline, which is why I tried to narrow it down; NvMediaIEPFeedFrame is just the first function we looked into that shows the same pattern: increasing processing time, then a spike, and back to baseline. The pattern is clearly visible on the charts but can easily be missed in the raw data, as the increase is slow.

I have reported this issue to our team. Please be aware that, as no further releases are planned for the Xavier generation, a resolution may not be available. Nevertheless, I’ll keep you updated on any progress.

Can you check if the spike occurs only at the I (Intra) frame interval?

It happens regardless of GOP size. We don’t actually use automatic I-frame insertion in our pipeline (GOP size 0). I reran the nvmimg_enc reference app with EPGopLength set to 0, and the results are the same as they were with EPGopLength set to 30.
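For reference, the only line changed between the two runs was the GOP length in the attached h265.cfg; this excerpt assumes the parameter syntax of that sample config, and everything else was left untouched:

# h265.cfg excerpt (all other parameters unchanged)
EPGopLength = 0    # was 30; 0 disables automatic I-frame insertion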

Thank you for providing additional details.

Could you please confirm the command line you executed to reproduce the issue? Was it the following?

./nvmimg_enc -cf h265.cfg

Additionally, could you provide details on how you are measuring the performance?

Yes, running the app with

./nvmimg_enc -cf h265.cfg

should be enough.

It measures the time taken by NvMediaIEPFeedFrame with GetTimeMicroSec and prints it to stdout; I then plot the results.
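In case it helps reproduce the measurement, the instrumentation around the call is essentially the sketch below. The variable names (encoderCtx, inputImage, encodePicParams, instanceId) are placeholders for the sample’s own state, and GetTimeMicroSec/LOG_ERR are the helpers from the sample utils in our DRIVE OS 5.2.x tree, so treat the exact signatures as an assumption if your headers differ.

/* Per-frame measurement inside the encode loop (sketch). GetTimeMicroSec
 * returns a monotonic timestamp in microseconds; clock_gettime(CLOCK_MONOTONIC)
 * would serve the same purpose. */
NvU64 startUs = 0, endUs = 0;
NvMediaStatus status;

GetTimeMicroSec(&startUs);
status = NvMediaIEPFeedFrame(encoderCtx,        /* NvMediaIEP handle        */
                             inputImage,        /* NvMediaImage to encode   */
                             NULL,              /* source rect (full frame) */
                             &encodePicParams,  /* H.265 picture params     */
                             instanceId);       /* encoder instance         */
GetTimeMicroSec(&endUs);

if (status != NVMEDIA_STATUS_OK) {
    LOG_ERR("NvMediaIEPFeedFrame failed: %d\n", status);
}

/* One number per frame on stdout; the charts are plotted from this column. */
printf("%llu\n", (unsigned long long)(endUs - startUs));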