Decoding mjpeg from USB3.0 camera

Hi,
Trying to use two USB cameras (1920x1080 @ 60fps) with DeepStream 5.0, with no luck:

  • when using deepstream-app as-is, it opens the camera in YUV format and the video is choppy, with fps too low to be usable

  • looking through the forums, I managed to HW-decode MJPEG with this pipeline (60fps per camera):
    gst-launch-1.0 v4l2src device=/dev/video0 io-mode=2 ! 'image/jpeg,width=1920,height=1080' ! nvjpegdec ! 'video/x-raw(memory:NVMM)' ! fpsdisplaysink video-sink=fakesink sync=false text-overlay=false -v
    gst-launch-1.0 v4l2src device=/dev/video1 io-mode=2 ! 'image/jpeg,width=1920,height=1080' ! nvjpegdec ! 'video/x-raw(memory:NVMM)' ! fpsdisplaysink video-sink=fakesink sync=false text-overlay=false -v

  • Changing deepstream-app to use this pipeline fails to link to the streammux

  • I need some info on how to integrate it with DeepStream. Or, if you can provide a pipeline that also connects to streammux and nvinfer and saves each camera as H.265, that would be great.

  • I also tried this pipeline, but I only get a 30fps recording:
    gst-launch-1.0 v4l2src device=/dev/video1 io-mode=2 ! 'image/jpeg, width=1920, height=1080, framerate=60/1' ! nvjpegdec ! 'video/x-raw, format=I420' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! nvv4l2h265enc ! h265parse ! filesink location=test.h265 -e
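For reference, the kind of combined pipeline asked for above might be sketched roughly like this. This is untested: nvstreammux, nvinfer, and the tee layout use real element names, but the config-file path, output file names, and exact pad wiring are assumptions, not a verified setup:

```shell
# Sketch only: two MJPEG cameras -> nvstreammux -> nvinfer, with camera 0
# also tee'd off and recorded to H.265. Paths are placeholders.
gst-launch-1.0 \
  nvstreammux name=mux batch-size=2 width=1920 height=1080 live-source=1 ! \
    nvinfer config-file-path=config_infer_primary.txt ! fakesink \
  v4l2src device=/dev/video0 io-mode=2 ! 'image/jpeg,width=1920,height=1080,framerate=60/1' ! \
    nvv4l2decoder mjpeg=1 ! tee name=t0 \
  t0. ! queue ! mux.sink_0 \
  t0. ! queue ! nvv4l2h265enc ! h265parse ! matroskamux ! filesink location=cam0.mkv \
  v4l2src device=/dev/video1 io-mode=2 ! 'image/jpeg,width=1920,height=1080,framerate=60/1' ! \
    nvv4l2decoder mjpeg=1 ! queue ! mux.sink_1
```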

Thanks!

Hi,
In the DeepStream SDK, you can use nvv4l2decoder for MJPEG decoding. Please try:

gst-launch-1.0 v4l2src device=/dev/video0 io-mode=2 ! 'image/jpeg,width=1920,height=1080' ! nvv4l2decoder mjpeg=1 ! nvoverlaysink

If you can run the pipeline and achieve the target frame rate, please adapt it to deepstream-app.
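For the deepstream-app side, the source section of the config file would look roughly like this (the key names come from the stock deepstream-app config format; the values here are assumptions for a 1080p60 V4L2 camera on /dev/video0):

```ini
[source0]
enable=1
# type 1 = CameraV4L2
type=1
camera-width=1920
camera-height=1080
camera-fps-n=60
camera-fps-d=1
camera-v4l2-dev-node=0
```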

Thanks for the prompt reply, Dane.

Two issues:

  1. I get only 30 fps with fpsdisplaysink:
    gst-launch-1.0 v4l2src device=/dev/video0 io-mode=2 ! 'image/jpeg,width=1920,height=1080,framerate=60/1' ! nvv4l2decoder mjpeg=1 ! fpsdisplaysink video-sink=nvoverlaysink text-overlay=false

    When recording, fpsdisplaysink actually shows 60fps, but the recorded video is 30fps:
    gst-launch-1.0 v4l2src device=/dev/video0 io-mode=2 ! 'image/jpeg,width=1920,height=1080,framerate=60/1' ! nvv4l2decoder mjpeg=1 ! nvv4l2h265enc ! h265parse ! fpsdisplaysink video-sink='filesink location=test3.h265' text-overlay=false -v

  2. There is a big RGB offset. A colorspace issue?

Hi,
A raw H.265 elementary stream may not contain correct fps information. Please try muxing into a container:

gst-launch-1.0 v4l2src device=/dev/video0 io-mode=2 ! 'image/jpeg,width=1920,height=1080,framerate=60/1' ! nvv4l2decoder mjpeg=1 ! nvv4l2h265enc ! h265parse ! matroskamux ! filesink location=a.mkv
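To confirm the container now carries the right rate, the muxed file can be inspected with gst-discoverer-1.0 (part of gst-plugins-base tools; requires the recorded file, so this is a suggestion rather than a verified run):

```shell
# The video stream in the discovered output should report 60 fps
gst-discoverer-1.0 a.mkv
```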

There is a known issue with decoding YUV422 MJPEG. Please try the prebuilt lib:
https://elinux.org/Jetson/L4T/r32.4.x_patches
[GSTREAMER]Prebuilt lib for decoding YUV422 MJPEG through nvv4l2decoder

So I’ve implemented this pipeline in deepstream-app and am getting 60fps on both cameras!
Only the RGB offset issue remains.
Thanks!

Here is the modified deepstream-app code (deepstream_source_bin.c):

    NVGSTDS_INFO_MSG_V ("create_camera_source_bin\n");

    GstCaps *caps = NULL, *caps1 = NULL, *convertCaps = NULL;
    gboolean ret = FALSE;

    switch (config->type) {
      case NV_DS_SOURCE_CAMERA_CSI:
        bin->src_elem =
            gst_element_factory_make (NVDS_ELEM_SRC_CAMERA_CSI, "src_elem");
        g_object_set (G_OBJECT (bin->src_elem), "bufapi-version", TRUE, NULL);
        g_object_set (G_OBJECT (bin->src_elem), "maxperf", TRUE, NULL);
        break;
      case NV_DS_SOURCE_CAMERA_V4L2:
        bin->src_elem =
            gst_element_factory_make (NVDS_ELEM_SRC_CAMERA_V4L2, "src_elem");
        /* Extra caps filter so the camera delivers MJPEG instead of raw YUV */
        bin->cap_filter1 =
            gst_element_factory_make (NVDS_ELEM_CAPS_FILTER, "src_cap_filter1");
        if (!bin->cap_filter1) {
          NVGSTDS_ERR_MSG_V ("Could not create 'src_cap_filter1'");
          goto done;
        }
        caps1 = gst_caps_new_simple ("image/jpeg",
            "width", G_TYPE_INT, config->source_width, "height", G_TYPE_INT,
            config->source_height, "framerate", GST_TYPE_FRACTION,
            config->source_fps_n, config->source_fps_d, NULL);
        break;
      default:
        NVGSTDS_ERR_MSG_V ("Unsupported source type");
        goto done;
    }

    if (!bin->src_elem) {
      NVGSTDS_ERR_MSG_V ("Could not create 'src_elem'");
      goto done;
    }

    bin->cap_filter =
        gst_element_factory_make (NVDS_ELEM_CAPS_FILTER, "src_cap_filter");
    if (!bin->cap_filter) {
      NVGSTDS_ERR_MSG_V ("Could not create 'src_cap_filter'");
      goto done;
    }

    caps = gst_caps_new_simple ("video/x-raw", "format", G_TYPE_STRING, "NV12",
        "width", G_TYPE_INT, config->source_width, "height", G_TYPE_INT,
        config->source_height, "framerate", GST_TYPE_FRACTION,
        config->source_fps_n, config->source_fps_d, NULL);

    if (config->type == NV_DS_SOURCE_CAMERA_CSI) {
      GstCapsFeatures *feature = NULL;
      feature = gst_caps_features_new ("memory:NVMM", NULL);
      gst_caps_set_features (caps, 0, feature);
    }

    if (config->type == NV_DS_SOURCE_CAMERA_V4L2) {
      GstCapsFeatures *feature = NULL;
      /* Hardware MJPEG decoder (outputs NVMM memory) */
      GstElement *jpg_dec = gst_element_factory_make ("nvv4l2decoder", "jpeg-decoder");
      if (!jpg_dec) {
        NVGSTDS_ERR_MSG_V ("Failed to create 'jpg_dec'");
        goto done;
      }
      g_object_set (G_OBJECT (bin->cap_filter1), "caps", caps1, NULL);
      g_object_set (G_OBJECT (bin->src_elem), "io-mode", 2, NULL);
      g_object_set (G_OBJECT (jpg_dec), "mjpeg", 1, NULL);
      gst_bin_add_many (GST_BIN (bin->bin), bin->src_elem, bin->cap_filter1,
          jpg_dec, NULL);

      /* v4l2src -> image/jpeg caps -> nvv4l2decoder; the decoder's src pad
         becomes the source bin's ghost pad */
      NVGSTDS_LINK_ELEMENT (bin->src_elem, bin->cap_filter1);
      NVGSTDS_LINK_ELEMENT (bin->cap_filter1, jpg_dec);
      NVGSTDS_BIN_ADD_GHOST_PAD (bin->bin, jpg_dec, "src");

Thanks! The color issue was solved with the prebuilt lib.


Hello, I’ve been trying to implement this change with the code above but can’t seem to build correctly. I am editing /opt/nvidia/deepstream/deepstream-5.0/sources/apps/apps-common/src/deepstream_source_bin.c and running sudo make clean; sudo make in /opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-app.

It compiles the modified code, but the deepstream-app binary doesn't seem to reflect the change; running it doesn't behave any differently. Any help would be appreciated, thanks!

Hi anjankar,

Please open a new topic on the DeepStream SDK forum: https://forums.developer.nvidia.com/c/accelerated-computing/intelligent-video-analytics/deepstream-sdk/15