Jetpack 4.3 jpeg compression performance suddenly becomes much worse

Hello,

Since updating to JetPack 4.3 on my AGX, I see a significant degradation in JPEG compression performance using the jetson_multimedia_api.
I'm using the provided NvJPEGEncoder with encodeFromFd(): https://docs.nvidia.com/jetson/l4t-multimedia/classNvJPEGEncoder.html
Timing each encode reveals the following discontinuity:

jpeg compression took 4022233 nano seconds
jpeg compression took 3972983 nano seconds
jpeg compression took 4682584 nano seconds
jpeg compression took 4114430 nano seconds
jpeg compression took 4560082 nano seconds
jpeg compression took 4102493 nano seconds
jpeg compression took 1008823608 nano seconds
jpeg compression took 1007201268 nano seconds
jpeg compression took 1006916238 nano seconds
jpeg compression took 1009333316 nano seconds
jpeg compression took 1006695249 nano seconds
jpeg compression took 1009864555 nano seconds
jpeg compression took 1007323804 nano seconds
jpeg compression took 1007725814 nano seconds
jpeg compression took 1008991703 nano seconds
jpeg compression took 1008434629 nano seconds

The performance discontinuity occurs after several hundred successful encodes. I am encoding into an application-allocated buffer large enough that no new allocation is required.

This is the output from the jpegenc profiling:

----------- Element = jpenenc -----------
Total units processed = 644
Average latency(usec) = 4276
Minimum latency(usec) = 3802
Maximum latency(usec) = 9855

----------- Element = jpenenc -----------
Total units processed = 645
Average latency(usec) = 4276
Minimum latency(usec) = 3802
Maximum latency(usec) = 9855

----------- Element = jpenenc -----------
Total units processed = 646
Average latency(usec) = 4277
Minimum latency(usec) = 3802
Maximum latency(usec) = 9855

----------- Element = jpenenc -----------
Total units processed = 647
Average latency(usec) = 5826
Minimum latency(usec) = 3802
Maximum latency(usec) = 1006935

----------- Element = jpenenc -----------
Total units processed = 648
Average latency(usec) = 7371
Minimum latency(usec) = 3802
Maximum latency(usec) = 1006935

I believe this is caused by some kind of resource leak.

My application allocates three buffers per camera on the first frame: one YUV buffer to receive frames from my IEGLOutputStream, a second YUV buffer for JPEG compression, and one ABGR buffer for raw frame processing.
As a test, I allocated and released these buffers on every frame, and buffer creation started failing after 646 frames. My suspicion is that the JPEG encoder also allocates hardware buffers, and that once it can no longer allocate one, it falls back to CPU encoding.
Since I only allocate the three buffers during normal operation, what other ways are there to leak these buffers?
How many hardware buffers are available on the AGX?

I’ve now confirmed that the problem is very likely in the JPEG encoder.
If I create and destroy hardware buffers on each iteration for raw processing only, I have no issues. Only when I add the JPEG encoder do I start hitting the out-of-hardware-buffers error described above.

Hi,
Please apply the prebuilt library and try again:
https://elinux.org/Jetson/L4T/r32.3.x_patches
[GSTREAMER]streaming using jpegenc halts after a short delay
