I built and ran the 05_jpeg_encode sample from jetson_multimedia_api with a 2464 x 2056 image. When I run it with the --encode-buffer option I get the following times:
NvMMLiteBlockCreate : Block : BlockType = 1
----------- Element = jpenenc -----------
Total units processed = 300
Average latency(usec) = 5690
Minimum latency(usec) = 5416
Maximum latency(usec) = 23638
-------------------------------------
and with --encode-fd:
NvMMLiteBlockCreate : Block : BlockType = 1
----------- Element = jpenenc -----------
Total units processed = 300
Average latency(usec) = 15772
Minimum latency(usec) = 15552
Maximum latency(usec) = 29052
-------------------------------------
According to the documentation, encodeFromBuffer is supposed to be slower than encodeFromFd, yet here it is almost three times faster. Any ideas why?
This actually speeds up the encoding process quite a bit: the --encode-buffer time drops from ~5.7 ms to ~5 ms, and the --encode-fd time from ~15 ms to ~5.8 ms. So encoding from a hardware buffer is now a lot faster, but still slower than encoding from a software buffer. Is this to be expected?
Hi,
The capability of the hardware encoder is identical in the two cases. The minor deviation comes from moving the buffer data: --encode-buffer is slightly faster, but its CPU usage is higher. Please run sudo tegrastats and check.
Either mode should be fine; you can pick the one that fits your use case.