I have a program that encodes a video input to jpg and mp4 files; I'm using NvJpegEncoder and NvVideoEncoder for encoding. The video input is converted with NvVideoConverter to a V4L2_PIX_FMT_YUV420M NvBuffer, which is then fed to NvVideoEncoder and NvJpegEncoder (encodeFromFd). In the attached image you can see the color shift: left is the video, right the image from NvJpegEncoder. It looks like fog in the image; see also the attached histogram.
The quality parameter would only cause steps in the histogram; what I see is a compressed histogram.
I have found a way to correct the values. The input YUV value ranges of NvJpegEncoder and NvVideoEncoder are different: limited range [16, 235] vs. full range [0, 255] (see "Numerical approximations", YUV - Wikipedia).
NvBufferColorFormat_YUV420_ER is needed by NvJpegEncoder to get the colors right. Maybe someone from NVIDIA @dusty_nv can confirm this or help with a better solution than the following.
Unpleasant solution (because of an additional NvBufferTransform):
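Roughly what I do (a pseudocode-level sketch of the workaround, not verified code; buffer creation and error handling omitted): allocate an extra NvBufferColorFormat_YUV420_ER buffer, transform the converter output into it with NvBufferTransform, and hand that fd to encodeFromFd instead of the original one:

```
// Assumption: dst_fd was created once via NvBufferCreateEx with
// colorFormat = NvBufferColorFormat_YUV420_ER and the same width/height
// as the NvVideoConverter output buffer (src_fd).
NvBufferTransformParams params = {0};
params.transform_flag = NVBUFFER_TRANSFORM_FILTER;
params.transform_filter = NvBufferTransform_Filter_Smart;

// The extra copy: limited-range YUV420 -> full-range YUV420_ER
NvBufferTransform(src_fd, dst_fd, &params);

// Encode the full-range buffer instead of the original one
jpegenc->encodeFromFd(dst_fd, JCS_YCbCr, &out_buf, out_buf_size);
```

The cost is one additional hardware transform (and one extra buffer) per JPEG, which is why I'd prefer a cleaner solution.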
Hi,
Is the source format NvBufferColorFormat_YUV420 [16, 235] or NvBufferColorFormat_YUV420_ER [0, 255]?
From the description, it looks like if we feed [0, 255] to NvJpegEncoder, it is automatically converted to [16, 235].
Yes, the source format is NvBufferColorFormat_YUV420. It would be interesting to know what the NvJpegEncoder source format is: NvBufferColorFormat_YUV420 or NvBufferColorFormat_YUV420_ER?
Hi,
Could you share how you do the histogram comparison? Then we can generate JPEGs through 05_jpeg_encode and compare the before/after histograms.
It looks like [0, 255] is converted to [16, 235], and [16, 235] is converted to an even more limited range. We will need to investigate this further. Please share the tool and steps with us.
I also think there is something wrong with the ranges. I have reproduced the error from my application with the sample applications. The test file is any 1920x1080 jpg file, input.jpg. First I decode the jpg with 06_jpeg_decode to YUV (V4L2_PIX_FMT_YUV420M) and encode it back to jpg with 05_jpeg_encode. Then I concatenate multiple copies of test.yuv into video.yuv to get a video, encode video.yuv with 01_video_encode to an H264 video, and use ffmpeg to convert the H264 to an mp4 file. Finally, open test.jpg and video.mp4 in a web browser in two tabs and switch between the tabs; you can see the difference in the shadows and highlights.
I have also checked the input format for 05_jpeg_encode and 01_video_encode; both are V4L2_PIX_FMT_YUV420M.
./jpeg_decode num_files 1 input.jpg test.yuv
../05_jpeg_encode/jpeg_encode test.yuv 1920 1080 test.jpg -f 1 -crop 0 0 1920 1080
cat test.yuv >> video.yuv (run 30-40 times to get a video; my video.yuv is 306 MB)
../01_video_encode/video_encode video.yuv 1920 1080 H264 video.h264
ffmpeg -i video.h264 -vcodec copy -an video.mp4
I hope you can reproduce my problem with these steps!
Hi,
Please share a JPEG or YUV420 file so that we can see the difference between the software JPEG encoder and the hardware JPEG encoder. The test pattern generated through videotestsrc may not show the deviation. We have tried hardware video encoding and don't observe a difference.
Hi,
Attached is the JPEG I get on r32.3.1/TX2: hw_jpegenc.zip (1.4 MB)
I don't see a significant difference compared to original.jpg. Please help take a look. There is probably some deviation that I'm missing.
Thanks. Aside from the missing crop, the colors are identical. As mentioned above, I only see the difference between the jpg and the mp4. Can you encode an mp4 and share it?