How can the `nvv4l2decoder` element support dynamic resolution modification?

Setting enable-frame-type-reporting=true has no impact on the decoder's behavior; it only enables frame-type reporting.

gst-launch-1.0 filesrc location="your misc.h264 path" ! h264parse ! nvv4l2decoder enable-frame-type-reporting=true ! nv3dsink

Can you dump the H.264 stream that causes the hang?

Hello, the attached stream stalls after the resolution switch when run through the pipeline below:

appsrc ! h264parse ! nvdec ! appsink  # enable-frame-type-reporting=FALSE

The video-saving flow is: the H.264 data is obtained from the device, first written to a local file, and then pushed into the GStreamer pipeline through appsrc (a sketch of the appsrc push path is shown below).

At the same time, the test pipeline gst-launch-1.0 filesrc location="your misc.h264 path" ! h264parse ! nvv4l2decoder enable-frame-type-reporting=true ! nv3dsink plays the same file without problems.
test.zip (22.9 MB)
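For reference, a minimal sketch of how the H.264 chunks could be pushed into appsrc (a hedged example; the function name push_h264_chunk and the assumption that the appsrc element is available as appsrc_ are illustrative, not taken from the actual application):

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

/* Push one H.264 chunk (as read from the device/local file) into appsrc.
 * gst_app_src_push_buffer() takes ownership of the buffer. */
static gboolean push_h264_chunk(GstElement* appsrc_, const guint8* data, gsize size) {
    GstBuffer* buffer = gst_buffer_new_allocate(NULL, size, NULL);
    gst_buffer_fill(buffer, 0, data, size);
    return gst_app_src_push_buffer(GST_APP_SRC(appsrc_), buffer) == GST_FLOW_OK;
}

With byte-stream H.264 fed this way, the downstream h264parse should re-parse the SPS and update the caps when the resolution changes.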

Use the command below to dump the frame information.

ffprobe -i test.h264 -show_frames -of xml > out.xml

The resolution changes at frame 224:

223: <frame media_type="video" stream_index="0" key_frame="0" pkt_duration="40000" pkt_duration_time="0.033333" pkt_pos="11961810" pkt_size="53243" width="1920" height="1440" pix_fmt="yuv420p" pict_type="P" coded_picture_number="227" display_picture_number="0" interlaced_frame="0" top_field_first="0" repeat_pict="0" chroma_location="left"/>
224: <frame media_type="video" stream_index="0" key_frame="1" pkt_duration="40000" pkt_duration_time="0.033333" pkt_pos="12015053" pkt_size="35515" width="1920" height="1080" pix_fmt="yuv420p" pict_type="I" coded_picture_number="228" display_picture_number="0" interlaced_frame="0" top_field_first="0" repeat_pict="0" chroma_location="left"/>

nvv4l2decoder should be able to handle this normally.

As you mentioned before, if saving the decoded output to YUV works normally, the hang is probably caused by the appsink.

Hello, when I tested the pipeline below, no matter how I switched the resolution it never blocked, so I have been unable to determine which element causes the problem.
appsrc ! h264parse ! appsink

Just parsing the H.264 stream, with no decoder?

When the resolution becomes larger, does your appsink need to reallocate memory according to the new video width and height?

I didn't do anything special in the code; I just read the data in the appsink and parse the resolution:


#include <gst/gst.h>
#include <gst/app/gstappsink.h>

// "new-sample" callback: pull the sample and log the width/height from its caps.
// Note: the user-data parameter is a plain gpointer, not gpointer*.
static GstFlowReturn appsink_new_sample(GstElement* sink, gpointer data) {
    GstSample* sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
    if (gst_app_sink_is_eos(GST_APP_SINK(sink))) {
        g_print("EOS received in Appsink********\n");
    }
    if (sample) {
        // Data arrived at the appsink; the buffer is read but not mapped or copied here.
        GstBuffer* buf = gst_sample_get_buffer(sample);
        (void)buf;

        GstCaps* caps = gst_sample_get_caps(sample);
        if (caps) {
            GstStructure* structure = gst_caps_get_structure(caps, 0);
            gint width, height;
            gst_structure_get_int(structure, "width", &width);
            gst_structure_get_int(structure, "height", &height);
            USER_LOG_INFO("appsink new sample width:%d , height:%d", width, height);
            // The caps are owned by the sample, so no gst_caps_unref() is needed here.
        }

        gst_sample_unref(sample);
        return GST_FLOW_OK;
    }

    return GST_FLOW_ERROR;
}
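
For completeness, the callback above would typically be connected like this (a hedged sketch; the appsink element name "mysink" is an assumption, not from the original code):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

/* Wire appsink_new_sample() into a pipeline containing an appsink named "mysink". */
static void connect_appsink(GstElement* pipeline) {
    GstElement* appsink = gst_bin_get_by_name(GST_BIN(pipeline), "mysink");

    /* "new-sample" is only emitted while the emit-signals property is TRUE. */
    g_object_set(appsink, "emit-signals", TRUE, NULL);
    g_signal_connect(appsink, "new-sample", G_CALLBACK(appsink_new_sample), NULL);

    gst_object_unref(appsink);
}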
