nvjpegdec slower than jpegdec on Jetson Nano

Details of my setup:

• Hardware Platform (Jetson / GPU): Jetson Nano B01
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): 4.6.6

Hi, I use a Hikvision USB camera, so I capture with v4l2src using JPEG input at 25 fps. I tried decoding with both jpegdec and nvjpegdec, and I don't understand why the output FPS with nvjpegdec is lower than with jpegdec. Can you point me to information or documents about this? (With nvjpegdec I add an nvvideoconvert after it.)

These are the components I use with my USB camera:

There is only jpegdec pipeline. What is the nvjpegdec pipeline?

What is the sink of your pipeline? How did you measure the output FPS?

Sorry, here is the detailed information:

Pipeline: jpegdec

Tested with ffmpeg: 25 fps

Duration: 00:00:14.04, start: 0.000000, bitrate: 4066 kb/s
    Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p(bt709, progressive), 1280x720, 4063 kb/s, SAR 1:1 DAR 16:9, 25 fps, 250 tbr, 2500 tbn, 5k tbc (default)

Pipeline: nvjpegdec

Tested with ffmpeg: 4.19 fps

Duration: 00:00:12.64, start: 0.000000, bitrate: 726 kb/s
    Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p(bt709, progressive), 1280x720, 725 kb/s, SAR 1:1 DAR 16:9, 4.19 fps, 250 tbr, 2500 tbn, 5k tbc (default)

This is my jpegdec code (when testing nvjpegdec, I replace jpegdec with nvjpegdec and comment out the videoconvert):

#include <gst/gst.h>
#include <signal.h>

static GstElement *pipeline = NULL;

/* On Ctrl+C, send EOS so qtmux can finalize the MP4 (write the moov
 * atom) before the pipeline shuts down. */
void handle_sigint(int sig)
{
    (void)sig;
    if (pipeline)
    {
        g_print("Caught SIGINT, sending EOS...\n");
        gst_element_send_event(pipeline, gst_event_new_eos());
    }
}

int main(int argc, char *argv[])
{
    GstElement *source, *capsfilter, *decoder,
        *videoconvert, *nvvidconv,
        *encoder, *parser, *muxer, *sink;
    GstCaps *caps;
    GstBus *bus;
    GstMessage *msg;
    GstStateChangeReturn ret;

    // Initialize GStreamer
    gst_init(&argc, &argv);

    // Set up the SIGINT handler
    signal(SIGINT, handle_sigint);

    // Create the elements
    source = gst_element_factory_make("v4l2src", "source");
    capsfilter = gst_element_factory_make("capsfilter", "capsfilter");
    decoder = gst_element_factory_make("jpegdec", "decoder");
    videoconvert = gst_element_factory_make("videoconvert", "videoconvert");
    nvvidconv = gst_element_factory_make("nvvidconv", "nvvidconv");
    encoder = gst_element_factory_make("nvv4l2h264enc", "encoder");
    parser = gst_element_factory_make("h264parse", "parser");
    muxer = gst_element_factory_make("qtmux", "muxer");
    sink = gst_element_factory_make("filesink", "sink");

    // Check if all elements are created
    if (!source || !capsfilter || !decoder || !videoconvert ||
        !nvvidconv || !encoder || !parser || !muxer || !sink)
    {
        g_printerr("Not all elements could be created.\n");
        return -1;
    }

    // Create the empty pipeline
    pipeline = gst_pipeline_new("video-pipeline");

    if (!pipeline)
    {
        g_printerr("Pipeline could not be created.\n");
        return -1;
    }

    // Set the element properties
    g_object_set(source, "device", "/dev/video0", NULL);
    caps = gst_caps_new_simple("image/jpeg",
                               "width", G_TYPE_INT, 1280,
                               "height", G_TYPE_INT, 720,
                               "framerate", GST_TYPE_FRACTION, 25, 1,
                               NULL);
    g_object_set(capsfilter, "caps", caps, NULL);
    gst_caps_unref(caps);
    g_object_set(sink, "location", "output.mp4", NULL);

    // Build the pipeline
    gst_bin_add_many(GST_BIN(pipeline), source, capsfilter, decoder,
                     videoconvert, nvvidconv,
                     encoder, parser, muxer, sink, NULL);
    if (gst_element_link_many(source, capsfilter, decoder,
                              videoconvert, nvvidconv,
                              encoder, parser, muxer, sink, NULL) != TRUE)
    {
        g_printerr("Elements could not be linked.\n");
        gst_object_unref(pipeline);
        return -1;
    }

    // Start playing
    ret = gst_element_set_state(pipeline, GST_STATE_PLAYING);
    if (ret == GST_STATE_CHANGE_FAILURE)
    {
        g_printerr("Unable to set the pipeline to the playing state.\n");
        gst_object_unref(pipeline);
        return -1;
    }

    // Wait until error or EOS
    bus = gst_element_get_bus(pipeline);
    msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                                     (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

    // Parse message
    if (msg != NULL)
    {
        GError *err;
        gchar *debug_info;

        switch (GST_MESSAGE_TYPE(msg))
        {
        case GST_MESSAGE_ERROR:
            gst_message_parse_error(msg, &err, &debug_info);
            g_printerr("Error received from element %s: %s\n", GST_OBJECT_NAME(msg->src), err->message);
            g_printerr("Debugging information: %s\n", debug_info ? debug_info : "none");
            g_clear_error(&err);
            g_free(debug_info);
            break;
        case GST_MESSAGE_EOS:
            g_print("End-Of-Stream reached.\n");
            break;
        default:
            // We should not reach here because we only asked for ERRORs and EOS
            g_printerr("Unexpected message received.\n");
            break;
        }
        gst_message_unref(msg);
    }

    GST_DEBUG_BIN_TO_DOT_FILE(
        GST_BIN(pipeline), GST_DEBUG_GRAPH_SHOW_ALL, "pipeline");

    // Free resources
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}

I’ve tried this with JetPack 6.1 and DeepStream 7.1 on an Orin NX.

nvjpegdec is faster than jpegdec with a local video file.

The pipelines are:

GST_DEBUG=fpsdisplaysink:7 gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p_mjpeg.mp4 ! qtdemux name=d d.video_0 ! nvjpegdec ! fpsdisplaysink text-overlay=false video-sink=fakesink signal-fps-measurements=true sync=false

and

GST_DEBUG=fpsdisplaysink:7 gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p_mjpeg.mp4 ! qtdemux name=d d.video_0 ! jpegdec ! fpsdisplaysink text-overlay=false video-sink=fakesink signal-fps-measurements=true sync=false

It seems nvjpegdec is slower than jpegdec only with a live stream. We will investigate it.

Thanks! If you have more information about it, please let me know. =))

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.