Jetson TX2 H265 decoder issue

Hello!

I need to decode an H265 stream into YUV images, so I use /tegra_multimedia_api/samples/00_video_decode for this.
The sample decodes an H265 file into a YUV file successfully, but I need to capture the H265 stream from a camera and decode it in real time. When I feed an H265 I frame to the decoder I get a correct YUV image, but when I feed an H265 P frame the output image is corrupted.
How can I solve this problem?

Please share your h265 stream so that we can reproduce the error.

Hi DaneLLL,

My h265 stream is captured by a camera. Each time I capture one frame, I call read_decoder_input_chunk(ctx->in_file[current_file], buffer) to read that one frame of h265 data and then decode it to YUV. So I don't store the h265 stream to a file.

The /tegra_multimedia_api/samples/00_video_decode sample reads all frames from the input file into buffers and decodes them successfully. But how can I read and decode one frame, and then read and decode the next frame?

    // Read encoded data and enqueue all the output plane buffers.
    // Exit loop in case file read is complete.
    i = 0;
    while (!eos && !ctx.got_error && !ctx.dec->isInError() &&
           i < ctx.dec->output_plane.getNumBuffers())
    {
        struct v4l2_buffer v4l2_buf;
        struct v4l2_plane planes[MAX_PLANES];
        NvBuffer *buffer;

        memset(&v4l2_buf, 0, sizeof(v4l2_buf));
        memset(planes, 0, sizeof(planes));

        buffer = ctx.dec->output_plane.getNthBuffer(i);
        if ((ctx.decoder_pixfmt == V4L2_PIX_FMT_H264) ||
                (ctx.decoder_pixfmt == V4L2_PIX_FMT_H265))
        {
            if (ctx.input_nalu)
            {
                read_decoder_input_nalu(ctx.in_file[current_file], buffer, nalu_parse_buffer,
                        CHUNK_SIZE, &ctx);
            }
            else
            {
                read_decoder_input_chunk(ctx.in_file[current_file], buffer);
            }
        }
        if (ctx.decoder_pixfmt == V4L2_PIX_FMT_VP9)
        {
            ret = read_vp9_decoder_input_chunk(&ctx, buffer);
            if (ret != 0)
                cerr << "Couldn't read VP9 chunk" << endl;
        }
        v4l2_buf.index = i;
        v4l2_buf.m.planes = planes;
        v4l2_buf.m.planes[0].bytesused = buffer->planes[0].bytesused;

        if (ctx.input_nalu && ctx.copy_timestamp && ctx.flag_copyts)
        {
          v4l2_buf.flags |= V4L2_BUF_FLAG_TIMESTAMP_COPY;
          ctx.timestamp += ctx.timestampincr;
          v4l2_buf.timestamp.tv_sec = ctx.timestamp / (MICROSECOND_UNIT);
          v4l2_buf.timestamp.tv_usec = ctx.timestamp % (MICROSECOND_UNIT);
        }

        if (v4l2_buf.m.planes[0].bytesused == 0)
        {
            if (ctx.bQueue)
            {
                current_file++;
                if(current_file != ctx.file_count)
                {
                    continue;
                }
            }
            if(ctx.bLoop)
            {
                current_file = current_file % ctx.file_count;
                continue;
            }
        }
        // It is necessary to queue an empty buffer to signal EOS to the decoder
        // i.e. set v4l2_buf.m.planes[0].bytesused = 0 and queue the buffer
        ret = ctx.dec->output_plane.qBuffer(v4l2_buf, NULL);
        if (ret < 0)
        {
            cerr << "Error Qing buffer at output plane" << endl;
            abort(&ctx);
            break;
        }
        printf("v4l2_buf.m.planes[0].bytesused = %u\n", v4l2_buf.m.planes[0].bytesused);
        if (v4l2_buf.m.planes[0].bytesused == 0)
        {
            eos = true;
            cout << "Input file read complete" << endl;
            break;
        }
        i++;
    }

Hi xyj,
Please refer to

static int
read_decoder_input_nalu(ifstream * stream, NvBuffer * buffer,
        char *parse_buffer, streamsize parse_buffer_size, context_t * ctx)

This function fills each buffer with exactly one complete frame. You should integrate it into your code.
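As a rough illustration only (this helper is not part of the sample, its name is made up, error handling is omitted, and it assumes each captured camera buffer holds one complete Annex-B access unit), queueing one encoded frame per buffer from memory could look like this:

#include <cstdint>
#include <cstring>
#include "NvVideoDecoder.h"

// Sketch: copy one complete encoded frame into a decoder output plane buffer
// and queue it, mirroring what read_decoder_input_nalu() does for file input.
// 'index' must cycle through the output plane buffers for the first
// getNumBuffers() frames; after that, re-use the index returned by
// output_plane.dqBuffer().
static int
queue_one_encoded_frame(NvVideoDecoder *dec, uint32_t index,
                        const uint8_t *frame_data, uint32_t frame_size)
{
    struct v4l2_buffer v4l2_buf;
    struct v4l2_plane planes[MAX_PLANES];
    NvBuffer *buffer = dec->output_plane.getNthBuffer(index);

    memset(&v4l2_buf, 0, sizeof(v4l2_buf));
    memset(planes, 0, sizeof(planes));

    // One buffer == one complete encoded frame (access unit).
    memcpy(buffer->planes[0].data, frame_data, frame_size);
    buffer->planes[0].bytesused = frame_size;

    v4l2_buf.index = index;
    v4l2_buf.m.planes = planes;
    v4l2_buf.m.planes[0].bytesused = buffer->planes[0].bytesused;

    // bytesused == 0 signals EOS to the decoder, so only call this with data.
    return dec->output_plane.qBuffer(v4l2_buf, NULL);
}

Keep the same decoder instance alive across frames, make sure the very first frame you feed contains the VPS/SPS/PPS headers plus an IDR frame, and re-use buffers returned by output_plane.dqBuffer() once the initial output plane buffers have all been queued, as the sample's main decode loop does.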

Or can you provide a sample program and documentation for real-time H265 stream processing? I need to input one frame, decode it to a YUV image, then input the next frame and decode it to a YUV image, i.e.:
1. H265 I frame data → video decoder → YUV data
2. H265 P frame data → video decoder → YUV data
3. H265 P frame data → video decoder → YUV data
...
7. H265 I frame data → video decoder → YUV data
8. H265 P frame data → video decoder → YUV data
9. H265 P frame data → video decoder → YUV data

Hi xyj,
Please store the h265 stream to a file so that we can reproduce the issue with 00_video_decode.

Hi DaneLLL,

Yes, I stored the h265 stream to a file frame by frame, but 00_video_decode cannot decode an H265 P frame when it is given only that single frame.
I think the decoder does not retain the frame header information.

Hi xyj,
P frames depend on previous I (or P) frames. If you want every frame to be independently decodable, you should record the stream as all I frames.
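Just for reference, if the H265 were produced by the Jetson's own hardware encoder (as in samples/01_video_encode), an all-intra stream could be configured roughly like the sketch below during encoder setup, before streaming starts (the helper name is made up):

#include "NvVideoEncoder.h"

// Illustration only: force every frame to be an IDR/I frame so that each one
// is independently decodable. The price is a much higher bitrate.
static void
make_all_intra(NvVideoEncoder *enc)
{
    enc->setIDRInterval(1);     // an IDR frame every frame
    enc->setIFrameInterval(1);  // no P frames in between
}

If the stream comes from an external camera, the equivalent setting has to be made in the camera's encoder; the decoder cannot reconstruct a P frame without the frames it references.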

Hi DaneLLL,

We need to process the video in real time, and if we record all I frames, H265 encoding loses most of its compression benefit.
Can you suggest a feasible solution for my application scenario?

Thanks!

Hi xyj,
We have provided samples demonstrating all of the hardware functions; please understand that you have to integrate them into your own use case. If you see an issue in h265 decoding, please store an h265 stream and attach it. Once we have the stream, we will run 00_video_decode to reproduce the issue:

$ ./video_decode H265 ~/<_your_h265_stream_>.h265
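If it helps, a trivial sketch (the helper name is made up) for concatenating the captured frames into such a file:

#include <cstdint>
#include <fstream>

// Append one encoded frame, exactly as received from the camera, to a raw
// Annex-B byte-stream file that ./video_decode can read.
static void
append_frame(std::ofstream &out, const uint8_t *data, size_t size)
{
    out.write(reinterpret_cast<const char *>(data),
              static_cast<std::streamsize>(size));
}

// Usage:
//   std::ofstream out("capture.h265", std::ios::binary);
//   // for every captured frame: append_frame(out, frame_ptr, frame_len);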

Hi DaneLLL,

This is the h265 stream file.
1234.rar (78.4 KB)

Hi xyj,

We tried your h265 stream with the command below, and the video looks good.

$ ./video_decode H265 1234.h265

Could you describe your issue and the steps to reproduce it in more detail?