JPEG encoder generates green pictures

Hi, I have an H.264 file and I need to decode the video and save the frames as JPEGs.
I simply modified samples/02_video_dec_cuda/videodec_main.cpp, adding some code after dec->capture_plane.dqBuffer(), like this:

        /* Dequeue a valid capture_plane buffer that contains YUV BL data */
        if (dec->capture_plane.dqBuffer(v4l2_buf, &dec_buffer, NULL, 0))

        // encode the dequeued buffer to jpg
        NvJPEGEncoder *jpegenc = NvJPEGEncoder::createJPEGEncoder("jpenenc");
        unsigned long out_buf_size = 1920 * 1080 * 3 / 2;
        unsigned char *out_buf = new unsigned char[out_buf_size];
        // encodeFromFd updates out_buf_size to the actual encoded size
        int ret = jpegenc->encodeFromFd(dec_buffer->planes[0].fd, JCS_YCbCr, &out_buf, out_buf_size);
        if (ret < 0)
            cerr << "Error: jpeg encode fail" << endl;
        static int ii = 0;
        std::ostringstream oss;
        oss << "frame" << ii++ << ".jpg";
        std::ofstream ofs(oss.str(), std::ios::binary);
        ofs.write((char *)out_buf, out_buf_size);
        delete[] out_buf;
        delete jpegenc;

        /* If converter is created, send the decoded data to converter,
           otherwise, just return the buffer to converter capture plane */
        if (ctx->conv)

It works well for the sample video /usr/src/jetson_multimedia_api/data/Video/sample_outdoor_car_1080p_10fps.h264, but generates pure green pictures for my test video.
I checked the decoder output; no problem there.

test video:
test.h264 (3.9 MB)

The decoded buffer is in YUV420 block-linear format. You should put the code in conv0_capture_dqbuf_thread_callback() to encode buffers in YUV420 pitch-linear.

I thought the JPEG encoder only works on block-linear format (not the other way around)? I checked jpeg_encode_main.cpp, and it converts from pitch-linear to block-linear before encoding to JPEG.
Anyway, I tried moving the code to conv0_capture_dqbuf_thread_callback(), and it crashed as expected. (Yes, it crashes on pitch-linear buffers, which I have run into multiple times.)

 (gdb) bt
#0  0x0000007fb23d8f18 in  () at /usr/lib/aarch64-linux-gnu/tegra/
#1  0x0000007fb23d9df8 in  () at /usr/lib/aarch64-linux-gnu/tegra/
#2  0x0000007fb7cb6208 in jpegTegraEncoderCompress () at /usr/lib/aarch64-linux-gnu/tegra/
#3  0x0000007fb7c80354 in jpeg_write_raw_data () at /usr/lib/aarch64-linux-gnu/tegra/
#4  0x000000555556ce1c in NvJPEGEncoder::encodeFromFd(int, J_COLOR_SPACE, unsigned char**, unsigned long&, int) ()
#5  0x000000555555dde8 in conv0_capture_dqbuf_thread_callback(v4l2_buffer*, NvBuffer*, NvBuffer*, void*) ()
#6  0x0000005555599574 in NvV4l2ElementPlane::dqThread(void*) ()
#7  0x0000007fb7f89088 in start_thread (arg=0x7faa33de1f) at pthread_create.c:463
#8  0x0000007fb6a374ec in thread_start () at ../sysdeps/unix/sysv/linux/aarch64/clone.S:78

Besides, when I add the code after dec->capture_plane.dqBuffer(), it works fine for sample_outdoor_car_1080p_10fps.h264, but not for my test video. I think it's related to the video pixel format, but I can't find any difference between the two samples (both NV12, 1920x1080).

I think I made a mistake. I checked the video pixel format with ffmpeg, and it's not the same: my video is yuvj420p, while sample_outdoor_car_1080p_10fps.h264 is yuv420p.

Input #0, h264, from '.\sample_outdoor_car_1080p_10fps.h264':
    Duration: N/A, bitrate: N/A
         Stream #0:0: Video: h264 (High), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 10 fps, 10 tbr, 1200k tbn, 20 tbc

Input #0, h264, from 'test.h264':
    Duration: N/A, bitrate: N/A
        Stream #0:0: Video: h264 (Main), yuvj420p(pc, progressive), 1920x1080, 25 fps, 25 tbr, 1200k tbn, 50 tbc

However, dec->capture_plane.getFormat() reports V4L2_PIX_FMT_NV12M for both videos.

Please modify the capture plane format:

        ret = ctx->conv->setCapturePlaneFormat((ctx->out_pixfmt == 1 ?
                                                    V4L2_PIX_FMT_NV12M :

to V4L2_NV_BUFFER_LAYOUT_BLOCKLINEAR, and apply JPEG encoding in conv0_capture_dqbuf_thread_callback(). See if this works.

I did what you said; now it throws an error somewhere else.

root@inspur-desktop:/usr/src/jetson_multimedia_api/samples/02_video_dec_cuda# ./video_dec_cuda /usr/src/jetson_multimedia_api/data/Video/sample_outdoor_car_1080p_10fps.h264 H264 --disable-rendering --input-nalu -o 1
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Starting decoder capture loop thread
Video Resolution: 1920x1080
pixel format: 842091854, nv12: 842091854, yuv420p: 842091865
libv4l2_nvvidconv (0):(802) (INFO) : Allocating (14) OUTPUT PLANE BUFFERS Layout=1
libv4l2_nvvidconv (0):(818) (INFO) : Allocating (14) CAPTURE PLANE BUFFERS Layout=1
[ERROR] (NvBuffer.cpp:169) <Buffer> Could not map buffer 0, plane 0
[ERROR] (NvV4l2ElementPlane.cpp:720) <conv0> Capture Plane:Error during setup
Error in converter capture plane setup
Error in query_and_set_capture
Exiting decoder capture loop thread
[ERROR] (NvV4l2ElementPlane.cpp:178) <dec0> Output Plane:Error while DQing buffer: Broken pipe
Error DQing buffer at output plane
Decoder is in error
App run failed

It seems ctx->conv->capture_plane.setupPlane() failed.

The decoded buffer is in NvBufferColorFormat_NV12_ER, which is not supported in NvVideoConverter. Please call NvBufferTransform() to convert to NvBufferColorFormat_NV12 or NvBufferColorFormat_YVU420, and then call encodeFromFd().
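A rough sketch of that flow, assuming the nvbuf_utils and NvJpegEncoder APIs from the Jetson Multimedia API (error handling trimmed, and the helper name and parameters are my own; verify struct fields against your release's nvbuf_utils.h):

```cpp
#include <fstream>
#include "nvbuf_utils.h"
#include "NvJpegEncoder.h"

// Transform the decoder's NV12_ER dmabuf into a plain NV12 buffer,
// then JPEG-encode the transformed fd and write the result to disk.
static int encode_frame_to_jpeg(int dec_fd, int width, int height,
                                NvJPEGEncoder *jpegenc, const char *path)
{
    // Allocate a destination buffer in plain (non-ER) NV12, pitch-linear.
    NvBufferCreateParams cparams = {0};
    cparams.width = width;
    cparams.height = height;
    cparams.layout = NvBufferLayout_Pitch;
    cparams.colorFormat = NvBufferColorFormat_NV12;
    cparams.payloadType = NvBufferPayload_SurfArray;
    cparams.nvbuf_tag = NvBufferTag_JPEG;
    int dst_fd = -1;
    if (NvBufferCreateEx(&dst_fd, &cparams) < 0)
        return -1;

    // Convert NV12_ER -> NV12 with the hardware converter.
    NvBufferTransformParams tparams = {0};
    tparams.transform_flag = NVBUFFER_TRANSFORM_FILTER;
    tparams.transform_filter = NvBufferTransform_Filter_Smart;
    if (NvBufferTransform(dec_fd, dst_fd, &tparams) < 0) {
        NvBufferDestroy(dst_fd);
        return -1;
    }

    // Encode the transformed buffer; encodeFromFd updates out_size
    // to the actual encoded length.
    unsigned long out_size = width * height * 3 / 2;
    unsigned char *out_buf = new unsigned char[out_size];
    int ret = jpegenc->encodeFromFd(dst_fd, JCS_YCbCr, &out_buf, out_size);
    if (ret == 0) {
        std::ofstream ofs(path, std::ios::binary);
        ofs.write(reinterpret_cast<char *>(out_buf), out_size);
    }
    delete[] out_buf;
    NvBufferDestroy(dst_fd);
    return ret;
}
```

For per-frame use you would keep the NvJPEGEncoder (and ideally the destination buffer) alive across calls rather than recreating them each frame.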

It works fine now after transforming to NvBufferColorFormat_NV12.
