As the title says, on the Xavier NX I am encoding JPEGs using the MMAPI example code, specifically NvJPEGEncoder::encodeFromBuffer() with an NvBuffer initialized as V4L2_PIX_FMT_YUV420M.
I pass a “full range” YCbCr representation to encodeFromBuffer(buffer, JCS_YCbCr, …). My YCbCr values were obtained from an RGB test pattern that I transformed to YCbCr 4:2:0 (I tried multiple methods, including libyuv).
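For context, the full-range (JFIF) RGB-to-YCbCr conversion that JCS_YCbCr implies can be sketched per pixel like this; this is the standard JFIF formula, not necessarily the exact implementation libyuv uses internally:

```python
def rgb_to_ycbcr_full(r, g, b):
    """Full-range (JFIF) RGB -> YCbCr for one pixel.

    All three channels use the full [0, 255] range; Cb/Cr are centered at 128.
    """
    clip = lambda x: max(0, min(255, int(x + 0.5)))  # round and clip to 8 bits
    y  = clip(0.299 * r + 0.587 * g + 0.114 * b)
    cb = clip(128 - 0.168736 * r - 0.331264 * g + 0.5 * b)
    cr = clip(128 + 0.5 * r - 0.418688 * g - 0.081312 * b)
    return y, cb, cr

# Black and white map to the full-range extremes:
# rgb_to_ycbcr_full(0, 0, 0)       -> (0, 128, 128)
# rgb_to_ycbcr_full(255, 255, 255) -> (255, 128, 128)
```

With full-range data, black really is Y=0 and white Y=255, which is why the [16, 235] values in the decoded output stood out.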
Later, when I decode a JPEG file created with this method, it is apparent that the minimum RGB values are now 15 or 16 in areas that were supposed to be 0, and the maximum values are ~235 to 240, rather than the expected 255.
From this experiment, it looks like the range of my YCbCr values is being clamped to [16,235] or [16,240] (depending on channel, of course). I have been looking for documentation to see what the expected YUV input range to the HW JPEG encoder is, but so far I have been unsuccessful.
Is the YUV range configurable, or at least documented somewhere?
Any help would be appreciated!
It is limited range [16, 235]. Could you send image data in the limited range to the JPEG encoder?
Yes, in principle I can transform my YCbCr values from “full range” to “studio range” [16, 235]. I have tested this in the following way:
- Transform RGB image to YCbCr.
- Compress YCbCr from [0, 255] to [16, 235]
- Save JPEG using NvJPEGEncoder::encodeFromBuffer(), transfer the resulting .jpg file to another machine
- Load the JPEG in OpenCV, which gives me an RGB image; transform it to YCrCb with OpenCV, undo the scaling by mapping [16, 235] back to [0, 255], and transform back from YCrCb to RGB with OpenCV.
- Compare the rescaled JPEG in RGB space to the original RGB image
Using these steps, I see errors in the range [-2, 8] across the individual R, G, and B channels, which seems in line with the expected quantization error of the range scaling. (My test image is a gradient with 8-pixel-wide strips, so chroma subsampling errors are minimal; saving and loading JPEGs with OpenCV typically produces errors in the range [-1, 1].)
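The compress/expand steps above can be sketched per sample like this; the scale factors 219/255 for Y and 224/255 for chroma are the usual studio-range factors, which is my assumption about the scaling involved:

```python
def full_to_limited(v, span=219):
    """Compress a full-range [0, 255] sample to limited range.

    span=219 maps Y onto [16, 235]; span=224 maps Cb/Cr onto [16, 240].
    """
    return round(16 + v * span / 255)

def limited_to_full(v, span=219):
    """Undo full_to_limited; result is clipped back to [0, 255]."""
    return max(0, min(255, round((v - 16) * 255 / span)))

# The round trip loses at most one code value per sample,
# consistent with the small quantization error seen above:
assert max(abs(limited_to_full(full_to_limited(v)) - v) for v in range(256)) <= 1
```

The residual [-2, 8] error in the full pipeline is then the sum of this quantization error and the JPEG compression error itself.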
So while this works, I am not comfortable producing non-standard JPEG files that require additional scaling after loading them.
Is this really my only option? Other than commercial software JPEG encoders for the Jetson platform, what are my options for producing standards-compliant JPEG files?
I am really hoping for just a quick function call to set the HW JPEG encoder to accept full-range YCbCr values … :-)
We would need to reproduce the issue and do further investigation. Please share steps for reproducing it by running 05_jpeg_encode, so that our teams can follow the steps to reproduce the issue first.
As a quick workaround, please use a software encoder such as cv2.imwrite() or the GStreamer jpegenc plugin.
I believe I have isolated the problem. I had misread the comment on NvJPEGEncoder::setCropRect() in the example to mean that I have to call it every time before compressing an image. The correct interpretation is that you should call it each time only if you want to crop the image before compressing it.
So I had been calling NvJPEGEncoder::setCropRect(0, 0, width, height) all this time, which probably triggered some video-stream component to perform the (trivial) crop. What I did not know, or expect, was that this cropping step automatically clamps the Y range to [16, 235] and the U/V range to [16, 240].
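In other words, the crop path appears to behave like the following per-sample clamp; this is my reconstruction of the observed behavior, not a documented API:

```python
def clamp_to_limited(y, u, v):
    """Clamp one YUV sample the way the crop path appears to:
    Y to [16, 235], U/V to [16, 240] (observed behavior, not documented)."""
    clip = lambda x, lo, hi: max(lo, min(hi, x))
    return clip(y, 16, 235), clip(u, 16, 240), clip(v, 16, 240)

# Full-range black (Y=0) comes back as Y=16, which decodes to RGB ~16,
# matching the minimum values I saw in the decoded test pattern:
# clamp_to_limited(0, 128, 128) -> (16, 128, 128)
```

That matches the symptoms exactly: minimum decoded RGB values of 15-16 and maxima around 235-240.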
If I simply call setCropRect(0, 0, 0, 0), or do not call it at all, then there is no implicit clamping of the YUV values, and I end up with a JPEG file with the expected full-range YCbCr values!
But I only discovered this by working through the 05_jpeg_encode example again slowly, and I only did that because you asked for an example, so thanks for guiding me through the process!
PS: note that 05_jpeg_encode will clamp YUV to [16, 235] unless the --encode-buffer switch is used!