Is nvjpeg used in the L4T MMAPI 12_camera_v4l2_cuda sample?

I’m looking at this in the MMAPI:

https://docs.nvidia.com/jetson/l4t-multimedia/l4t_mm_v4l2_cam_cuda_group.html

It decodes JPEG frames from a webcam; however, the sample code calls “jpeg_start_decompress”, which is not documented in the nvjpeg documentation but does exist as a symbol in /usr/lib/aarch64-linux-gnu/tegra/libnvjpeg.so:

objdump -T /usr/lib/aarch64-linux-gnu/tegra/libnvjpeg.so | grep jpeg_start_decompress

Could an expert help me work out what is going on with this undocumented jpeg_start_decompress function?

I was also reading the nvjpeg documentation, which says you should use nvjpegDecode to decode an image, but I don’t see that function used ANYWHERE in the sample code.
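From my reading of those docs, the CUDA nvJPEG path would look roughly like the sketch below. This is untested and based purely on the documentation; decode_y_nvjpeg is my own name, and the error handling is minimal:

```c
#include <cuda_runtime.h>
#include <nvjpeg.h>

/* Untested sketch of the CUDA nvJPEG API (my own helper name):
 * decode only the luminance plane of an in-memory JPEG into device
 * memory, using NVJPEG_OUTPUT_Y as the output format. */
int decode_y_nvjpeg(const unsigned char *jpg, size_t len,
                    unsigned char **d_y, int *w, int *h)
{
    nvjpegHandle_t handle;
    nvjpegJpegState_t state;

    if (nvjpegCreateSimple(&handle) != NVJPEG_STATUS_SUCCESS)
        return -1;
    nvjpegJpegStateCreate(handle, &state);

    /* Query image geometry before allocating the output plane. */
    int nc, widths[NVJPEG_MAX_COMPONENT], heights[NVJPEG_MAX_COMPONENT];
    nvjpegChromaSubsampling_t subsampling;
    nvjpegGetImageInfo(handle, jpg, len, &nc, &subsampling, widths, heights);
    *w = widths[0];
    *h = heights[0];

    nvjpegImage_t out = {0};
    cudaMalloc((void **)&out.channel[0], (size_t)*w * (size_t)*h);
    out.pitch[0] = (size_t)*w;

    /* NVJPEG_OUTPUT_Y requests the luminance plane only. */
    int rc = nvjpegDecode(handle, state, jpg, len,
                          NVJPEG_OUTPUT_Y, &out, NULL);
    *d_y = out.channel[0];

    nvjpegJpegStateDestroy(state);
    nvjpegDestroy(handle);
    return rc == NVJPEG_STATUS_SUCCESS ? 0 : -1;
}
```

But none of these nvjpeg* calls appear anywhere in the MMAPI sample, which is what confuses me.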

BTW, I’m using the Jetson Nano and have CUDA installed but I have not explicitly installed any nvjpeg components.

The reason for my question is that nvjpegDecode lets you set the output colourspace (I would like to decode my JPEG to luminance only), whereas I am having trouble getting the undocumented jpeg_start_decompress to decode just the Y component.

You will likely receive faster / better answers in the sub-forum dedicated to the Jetson Nano: https://devtalk.nvidia.com/default/board/371/jetson-nano/

Okay, if that is the case, please can you move this thread to that forum?

My question is 99% CUDA related. I only mentioned my platform for completeness.

I don’t know whether threads can be moved here. I certainly cannot move threads. In my experience, there is no strong adverse reaction to cross-posting on these forums.

FWIW, your question looks 99% nvjpeg related.

Okay thanks, I will post there too and see what answer I get.