Issues with jpegenc

We experience issues with the hardware jpeg encoding in our application (virtual memory keeps increasing).

There seems to be a related post here: Libjpeg encode leak - Jetson & Embedded Systems / Jetson Xavier NX - NVIDIA Developer Forums, which was reported as solved.

Just to test this I ran the sample:

./jpeg_encode /usr/src/jetson_multimedia_api/data/Picture/nvidia-logo.yuv 1920 1080 encode_test.jpg -quality 90 -f 1 -s 10000000 --dbg-level 3

Which at some point in time seems to crash:

[DEBUG] (NvJpegEncoder.cpp:117) :Succesfully encoded Buffer fd=967
[DEBUG] (NvJpegEncoder.cpp:65) jpenenc (0xaaab065475e0) destroyed
[DEBUG] (NvJpegEncoder.cpp:117) :Succesfully encoded Buffer fd=969
[DEBUG] (NvJpegEncoder.cpp:65) jpenenc (0xaaab065475e0) destroyed
[DEBUG] (NvJpegEncoder.cpp:117) :Succesfully encoded Buffer fd=971
[DEBUG] (NvJpegEncoder.cpp:65) jpenenc (0xaaab065475e0) destroyed
[DEBUG] (NvJpegEncoder.cpp:117) :Succesfully encoded Buffer fd=973
[DEBUG] (NvJpegEncoder.cpp:65) jpenenc (0xaaab065475e0) destroyed
[DEBUG] (NvJpegEncoder.cpp:117) :Succesfully encoded Buffer fd=975
[DEBUG] (NvJpegEncoder.cpp:65) jpenenc (0xaaab065475e0) destroyed
NVMAP_IOC_GET_FD failed: Bad address
NvRmStream: Channel submission failed (err=196623)
NvRmStream: Flush failed (err=196623)
JPEGEncFeedFrame 1796: Stream flush failed with error = 196623
Segmentation fault (core dumped)

This is on a Xavier NX with Jetpack 5.1.1 L4T: 35.3.1
I'm not sure this example is related to my application crash, but I suppose it shouldn't crash.

[edit: the input image can be created using the decode sample:

sudo ./jpeg_decode num_files 1 ../../data/Picture/nvidia-logo.jpg ../../data/Picture/nvidia-logo.yuv]



Thanks for reporting it. We will set up and try to replicate the issue.


I also succeeded in reproducing the same issue that I see in my application code. If the iteration around encodeFromFd at line 204 of jpeg_encode_main.cpp is increased to a large count, the virtual memory will (very) slowly fill until the application crashes.

for (long long i = 0; i < 10000000000LL; ++i)
{
    ret = ctx.jpegenc->encodeFromFd(dst_dma_fd, JCS_YCbCr, &out_buf,
              out_buf_size, ctx.quality);
    if (ret < 0)
    {
        cerr << "Error while encoding from fd" << endl;
        ctx.got_error = true;
    }
}

I assume it's allowed to use the encoder this way, i.e. that I don't need to create a new JPEG encoder context for each image that I want to compress.
I also assume that encodeFromFd does not (and should not) allocate output memory.

[edit: just to be clear, this is probably not the same issue as reported in the post above]
[edit: the above code, after a long while, crashes with:

PosixMemMap:84 mmap failed : Cannot allocate memory

The virtual memory increase can be monitored with htop.]

Can you give a timeline for when the above virtual memory leak will be addressed? It's delaying our work, and it doesn't seem too hard to replicate with the modified sample above.

Please apply the patch to 05 sample and try again:

@@ -225,6 +225,19 @@ cleanup:
     delete[] out_buf;
+    if(src_dma_fd != -1)
+    {
+        ret = NvBufSurf::NvDestroy(src_dma_fd);
+        src_dma_fd = -1;
+    }
+    if(dst_dma_fd != -1)
+    {
+        ret = NvBufSurf::NvDestroy(dst_dma_fd);
+        dst_dma_fd = -1;
+    }
     delete ctx.in_file;
     delete ctx.out_file;

It doesn't help; there is still a memory leak. I just changed the perf loop to 100,000 and tried to encode with

./jpeg_encode ../../data/Picture/nvidia-logo.yuv 1920 1080 test.jpg --perf

and after 65,000 iterations I got:

PosixMemMap:84 mmap failed : Cannot allocate memory

1 Like

I found one oddity in encodeFromFd, in the call jpeg_write_raw_data(&cinfo, NULL, 0).
I logged the start and end of this function with std::cout, then looked at the strace output:

strace -o trace_info.txt ./jpeg_encode ../../data/Picture/nvidia-logo.yuv 1920 1080 test.jpg --perf

Then I cut the tail off the trace file for analysis:

tail -n 10000 trace_info.txt

jpeg_write_raw_data() always calls mmap() and never calls munmap():

write(1, “start jpeg_write_raw_data\n”, 26) = 26
ioctl(3, _IOC(_IOC_READ|_IOC_WRITE, 0x4e, 0xf, 0x8), 0xfffff36feac0) = 0
mmap(NULL, 3538944, PROT_READ|PROT_WRITE, MAP_SHARED, 88, 0) = 0xffffc997820000

After a few cycles, the program tried mmap again, but it failed:

mmap(NULL, 3538944, PROT_READ|PROT_WRITE, MAP_SHARED, 88, 0) = -1 ENOMEM (Cannot allocate memory)
close(88) = 0
write(2, “PosixMemMap:84 mmap failed : Can”…, 52) = 5

I also tried to debug each loop iteration and looked at the output of:

sudo pmap <pid_process> -X | grep dmabuf

On each cycle, the program creates a 3456K dmabuf mapping (3538944 bytes, the same number as in the mmap strace).
Could you explain, please? Is this a bug, or do I need to release some resources from libjpeg/cinfo?


We will try to replicate the error and check further.

I forgot to mention that we have the same memory leak on an NVIDIA Jetson Orin NX 16GB, Jetpack 5.1.1, L4T 35.3.1.
Steps to reproduce:

  • In jetson_multimedia_api/samples/05_jpeg_encode/jpeg_encode_main.cpp, change
    “#define PERF_LOOP 300” to “#define PERF_LOOP 100000”
  • Run the 05 sample with

./jpeg_encode ../../data/Picture/nvidia-logo.yuv 1920 1080 test.jpg --perf

  • Open another terminal and run “htop”; the virtual memory increases without bound

I'm facing the same issue.
GStreamer's nvjpegenc element is affected similarly.

Facing the same issue on Jetson Xavier NX with Jetpack 5.1.1.
Running the same encoder repeatedly on the same m_dmabuf file descriptor:
m_JpegEncoder->encodeFromFd(m_dmabuf, JCS_YCbCr, &buffer, size, m_image_compression);
resulted in the mentioned PosixMemMap:84 mmap failed : Cannot allocate memory error.

It took roughly 65,000 iterations (more or less) to hit the issue. The same code running without a call to the encoder could hit over 290,000 iterations without issue.

Note that a workaround would be to feed nvjpegenc with system memory buffers instead of NVMM memory buffers. The bug wouldn't bite with system memory buffers.

Is there sample code showing how to use the API in that manner?

Thank you!

I only saw a difference when using the nvjpegenc element in the GStreamer framework. You may try it. See Nvargus-daemon stop by event overflow at about 64975 frames - #10 by Honey_Patouceul

Jetpack 5.1.2 is released. Please upgrade and give it a try. The issue shall be fixed.

I just upgraded to 5.1.2 and the memory issue is not seen so far after 245k iterations. Looks like this update fixed the problem :-)

Hi DaneLLL,
I also face this problem, and my Jetpack version is 5.1.1. But I don't want to upgrade the whole Jetpack on my board; can I only change


It does not work by only replacing it. Please completely upgrade to Jetpack 5.1.2.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.