This might be a stupid question, but I should say up front that I am very new to OpenCV.
I am working on a project that needs to encode frames as JPEG before sending them to a remote client in real time. I need to send a color image and a depth map, both 720p, which is essentially 3+1 channels of 720p data.
I used ncurses to measure the time taken by these operations.
The encoding time is about 15 ms per channel at 720p on the Jetson TX1. That adds up to about 60 ms in total, which cannot meet the real-time requirement.
I then googled a bit and read that libjpeg-turbo can speed things up.
When I do the same thing in a desktop virtual machine with OpenCV compiled against libjpeg-turbo, I get about 7 ms for encoding a 3-channel 720p image and 4 ms for grayscale 720p, which is decent enough for the application.
(I couldn’t get OpenCV to build against plain libjpeg instead of libjpeg-turbo, which means I couldn’t make a fair comparison without replacing libopencv4tegra on the Jetson. :( )
Anyway, my question is: does libopencv4tegra use libjpeg-turbo or plain libjpeg?
If it is using plain libjpeg, I guess I will have to try a self-compiled OpenCV instead of libopencv4tegra.
But if it is already using libjpeg-turbo, then I guess the difference is only down to the raw computing power gap between the ARM cores and the x86 cores…?