Set up nvconv dynamically for a USB web camera

• Hardware Platform (Jetson / GPU) GeForce RTX 4070
• DeepStream Version 6.2
• TensorRT Version 8.5.1.7
• NVIDIA GPU Driver Version (valid for GPU only) 535

Hello! I have successfully created a custom DeepStream pipeline that runs with my USB web camera as the source. For that I took the deepstream-faciallandmark-app from the TAO Toolkit and modified the code as follows:

// struct provided by deepstream-faciallandmark-app that carries the elements used to prepare the source input
typedef struct _DsSourceBin
{
    GstElement *source_bin;
    GstElement *source_v4l2; // for usb web camera (v4l2 protocol)
    GstElement *uri_decode_bin; // for offline videos
    GstElement *vidconv;
    GstElement *nvvidconv;
    GstElement *capsfilt;
    gint index;
}DsSourceBinStruct;

// this is how I create the pipeline elements that convert my web camera's format to NV12
static bool
create_source_bin_for_v4l2 (DsSourceBinStruct *ds_source_struct, gchar *uri) {
  
  printf("in create_source_bin_for_v4l2 = %s\n", uri);
  gchar bin_name[16] = { };
  GstCaps *caps_uyvy = NULL, *caps_nv12 = NULL;

  ds_source_struct->nvvidconv = NULL;
  ds_source_struct->vidconv = NULL;
  ds_source_struct->capsfilt = NULL;
  ds_source_struct->source_bin = NULL;
  ds_source_struct->uri_decode_bin = NULL;

  g_snprintf (bin_name, 15, "source-bin-%02d", ds_source_struct->index);
  /* Create a source GstBin to abstract this bin's content from the rest of the
   * pipeline */
  ds_source_struct->source_bin = gst_bin_new (bin_name);
  ds_source_struct->source_v4l2 = gst_element_factory_make ("v4l2src", "v4l2-source");
  ds_source_struct->vidconv = gst_element_factory_make ("videoconvert", "vidconv");
  ds_source_struct->nvvidconv = gst_element_factory_make ("nvvideoconvert", "nvconv");

  /* Skip the "v4l2://" prefix (7 characters) to get the device path, e.g. "/dev/video0" */
  gchar *device = g_strdup (uri + 7);
  g_object_set (G_OBJECT (ds_source_struct->source_v4l2), "device", device, "num-buffers", 900, NULL);
  g_free (device); /* g_object_set copies string properties, so this copy can be released */

  caps_uyvy = gst_caps_new_simple ("video/x-raw",
      "format", G_TYPE_STRING, "YUY2",
      "width", G_TYPE_INT, 640, "height", G_TYPE_INT, 480,
      "framerate", GST_TYPE_FRACTION, 30, 1, NULL);
  caps_nv12 = gst_caps_new_simple ("video/x-raw", "format", G_TYPE_STRING, "NV12", NULL);

  gst_bin_add_many (GST_BIN (ds_source_struct->source_bin), ds_source_struct->source_v4l2,
                    ds_source_struct->vidconv, ds_source_struct->nvvidconv, NULL);

  gst_element_link_filtered(ds_source_struct->source_v4l2, ds_source_struct->vidconv, caps_uyvy);
  gst_element_link_filtered(ds_source_struct->vidconv, ds_source_struct->nvvidconv, caps_nv12);

  gst_caps_unref(caps_uyvy);
  gst_caps_unref(caps_nv12);

  return true;
}

For some reason, the code above engages the camera with the YUY2 format at 640x480 by default. I know my camera is capable of producing a much higher quality stream, because here is the output of v4l2-ctl -d /dev/video0 --list-formats-ext:

ioctl: VIDIOC_ENUM_FMT
	Type: Video Capture

	[0]: 'MJPG' (Motion-JPEG, compressed)
		Size: Discrete 640x480
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 640x360
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.017s (60.000 fps)
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
	[1]: 'YUYV' (YUYV 4:2:2)
		Size: Discrete 640x480
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 640x360
			Interval: Discrete 0.033s (30.000 fps)

I also know that the v4l2src element can be run from the terminal like so:

gst-launch-1.0 v4l2src ! 'image/jpeg,width=1920,height=1080,framerate=30/1'

My question is: how can I force the v4l2src element to use the camera with MJPG, 1280x720, 30 fps in C code? Is there any DeepStream app example that does this? The reason I am struggling to do it without a reference is that I do not see a corresponding MJPEG format in DeepStream. Do I just need to write "MJPEG" as the format in the caps? And I believe that even with an MJPEG stream I still need this structure that converts to NV12 and feeds nvvidconv. Is my understanding correct? Thank you very much!

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Please refer to the FAQ.

