New nvstreammux + nvvideoconvert not working in python application

Hi everyone

- Hardware Platform (Jetson / GPU): RTX2080TI
- DeepStream Version: 6.2 (python bindings)
- TensorRT Version: 8.5.2-1+cuda11.8
- NVIDIA GPU Driver Version (valid for GPU only): 530.30.02
- Issue Type( questions, new requirements, bugs): Question

I have been trying to use the new nvstreammux, following the recommendations in the documentation. Among the properties that have been removed compared to the old nvstreammux are:

  • width: N/A; Scaling and color conversion support Deprecated.
  • height: N/A; Scaling and color conversion support Deprecated.

So, for cases where the sources have different resolutions, the documentation offers the following solution:

In this scenario, DeepStream recommends adding nvvideoconvert + capsfilter before each nvstreammux sink pad (enforcing the same resolution and format for all sources connecting to the new nvstreammux). This ensures that the heterogeneous nvstreammux batch output has buffers of the same caps (resolution and format).
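As I understand it, the recommended per-source chain is decodebin → nvvideoconvert → capsfilter → nvstreammux. This is my own sketch of that chain (the element names, the helper names, and the 1280x720 target resolution are my own choices, not from the documentation):

```python
# Sketch of the chain the documentation describes:
# decodebin -> nvvideoconvert -> capsfilter -> nvstreammux sink_%u.
# Helper/element names and the 1280x720 target are my own placeholders.

def mux_caps_str(width, height):
    """Caps string forcing the same resolution/format on every source."""
    return ("video/x-raw(memory:NVMM), "
            f"format=RGBA, width={width}, height={height}")

def add_scaling_chain(pipeline, streammux, source_id, width=1280, height=720):
    # gi is imported lazily so the pure helper above stays usable on its own
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    conv = Gst.ElementFactory.make("nvvideoconvert", f"conv_{source_id}")
    capsfilter = Gst.ElementFactory.make("capsfilter", f"caps_{source_id}")
    capsfilter.set_property("caps", Gst.Caps.from_string(mux_caps_str(width, height)))
    pipeline.add(conv)
    pipeline.add(capsfilter)
    conv.link(capsfilter)
    capsfilter.get_static_pad("src").link(streammux.get_request_pad(f"sink_{source_id}"))
    return conv  # the decodebin src pad should then be linked to conv's sink pad
```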

I have tried to create a pipeline with multiple sources that uses nvstreammux as described in that solution, adding an nvvideoconvert before the nvstreammux for each source. However, I can’t get it to work. I add these elements in the cb_newpad, as shown below:


def _cb_newpad(decodebin, pad, data):

    loggers['info'].info("Creating cb_newpad")

    caps = pad.get_current_caps()
    gststruct = caps.get_structure(0)
    gstname = gststruct.get_name()

    source_id, pipeline, streammux_video = data

    loggers['info'].info(f"gstname={gstname}")
    pad_name = "sink_%u" % source_id

    if gstname.find("video") != -1 and streammux_video is not None:

        queue_input = Gst.ElementFactory.make("queue", f"video_queue_input_1_{source_id}")
        if not queue_input:
            loggers['error'].error(f"Unable to create queue_input for source {source_id}")
            return
        
        pipeline.add(queue_input)
        decodebin.link(queue_input)
            
        videoconvert_input = Gst.ElementFactory.make("nvvideoconvert", f"videoconvert_input_{source_id}")
        if not videoconvert_input:
            loggers['error'].error(f"Unable to create videoconvert_input for source {source_id}")
            return
    
        pipeline.add(videoconvert_input)
        queue_input.link(videoconvert_input)
        
        queue_input_2 = Gst.ElementFactory.make("queue", f"video_queue_input_2_{source_id}")
        if not queue_input_2:
            loggers['error'].error(f"Unable to create queue_input_2 for source {source_id}")
            return
        
        pipeline.add(queue_input_2)
        videoconvert_input.link(queue_input_2)
        
        srcpad = queue_input_2.get_static_pad("src")
        if not srcpad:
            loggers['error'].error(f"Unable to get src pad for source {source_id}")
            return

        sinkpad = streammux_video.get_request_pad(pad_name)
        if not sinkpad:
            loggers['error'].error(f"Unable to get request sink pad for source {source_id}")
            return

        if srcpad.link(sinkpad) == Gst.PadLinkReturn.OK:
            loggers['info'].info(f"Decodebin {pad_name} linked to pipeline")
        else:
            loggers['error'].error(f"Failed to link decodebin: {pad_name}")

However, when I don’t add the nvvideoconvert, the pipeline runs without any problem, but then I cannot rescale the incoming video, which I need. Here is the version without that element:

def _cb_newpad(decodebin, pad, data):

    loggers['info'].info("Creating cb_newpad")

    caps = pad.get_current_caps()
    gststruct = caps.get_structure(0)
    gstname = gststruct.get_name()

    source_id, pipeline, streammux_video = data

    loggers['info'].info(f"gstname={gstname}")
    pad_name = "sink_%u" % source_id

    if gstname.find("video") != -1 and streammux_video is not None:

        sinkpad = streammux_video.get_request_pad(pad_name)
        if not sinkpad:
            loggers['error'].error(f"Unable to get request sink pad for source {source_id}")
            return

        if pad.link(sinkpad) == Gst.PadLinkReturn.OK:
            loggers['info'].info(f"Decodebin {pad_name} linked to pipeline")
        else:
            loggers['error'].error(f"Failed to link decodebin: {pad_name}")

This is my pipeline:

Below I also attach the logs of a run with the nvvideoconvert added after each decodebin, before the nvstreammux. As you can see, the execution freezes and no incoming video data is processed.

info.logs (18.8 KB)

Could you share a code snippet, or point me to a Python example, where an nvvideoconvert is added as indicated in the documentation?

Best regards

There is currently no similar usage in Python, but we have a C/C++ example of linking nvvideoconvert after multiple sources: sources\apps\sample_apps\deepstream-dewarper-test\deepstream_dewarper_test.c.

Code

for (i = 0; i < num_sources; i++) {
  guint source_id = 0;

  GstPad *mux_sinkpad, *srcbin_srcpad, *dewarper_srcpad, *nvvideoconvert_sinkpad;
  gchar pad_name[16] = { };
  GstElement *source_bin = create_source_bin (i, argv[arg_index++]);

  if (!source_bin) {
    g_printerr ("Failed to create source bin. Exiting.\n");
    return -1;
  }

  source_id = atoi (argv[arg_index++]);

  /* create nvvideoconvert element */
  nvvideoconvert = gst_element_factory_make ("nvvideoconvert", NULL);
  if (!nvvideoconvert) {
    g_printerr ("Failed to create nvvideoconvert element. Exiting.\n");
    return -1;
  }

  caps_filter = gst_element_factory_make ("capsfilter", NULL);
  if (!caps_filter) {
    g_printerr ("Failed to create capsfilter element. Exiting.\n");
    return -1;
  }

  GstCaps *caps = gst_caps_new_simple ("video/x-raw", "format", G_TYPE_STRING, "RGBA", NULL);
  GstCapsFeatures *feature = gst_caps_features_new (MEMORY_FEATURES, NULL);
  gst_caps_set_features (caps, 0, feature);

  g_object_set (G_OBJECT (caps_filter), "caps", caps, NULL);

  /* ... */
}

Could you see if this can help you?

Hi yuweiw

I have tried your approach but the result is the same: I can’t get it to process anything, even though the elements are added and the links are made in an apparently correct way.

def _cb_newpad(decodebin, pad, data):

    loggers['info'].info("Creating cb_newpad")

    caps = pad.get_current_caps()
    gststruct = caps.get_structure(0)
    gstname = gststruct.get_name()

    source_id, pipeline, streammux_video, streammux_audio = data

    loggers['info'].info(f"gstname={gstname}")
    pad_name = "sink_%u" % source_id
    
    try: 

        if gstname.find("video") != -1 and streammux_video is not None:
               
            videoconvert_input = Gst.ElementFactory.make("nvvideoconvert", f"videoconvert_input_{source_id}")
            if not videoconvert_input:
                loggers['error'].error(f"Unable to create videoconvert_input for source {source_id}")
                return
        
            pipeline.add(videoconvert_input)
            
            videocapsfilter_input = Gst.ElementFactory.make("capsfilter", f"video_capsfilter_input_{source_id}")
            if not videocapsfilter_input:
                loggers['error'].error(f"Unable to create video_capsfilter_input for source {source_id}")
                return
            
            caps1 = Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA")
            videocapsfilter_input.set_property("caps", caps1)
            
            pipeline.add(videocapsfilter_input)
            videoconvert_input.link(videocapsfilter_input)
            
            mux_sinkpad = streammux_video.get_request_pad(pad_name)
            if not mux_sinkpad:
                loggers['error'].error(f"Unable to get request sink pad for video source {source_id}")
                return
            
            nvvideoconvert_sinkpad = videoconvert_input.get_static_pad("sink")
            if not nvvideoconvert_sinkpad:
                loggers['error'].error(f"Unable to get sink pad of nvvideoconvert for source {source_id}")
                return
            
            if pad.link(nvvideoconvert_sinkpad) == Gst.PadLinkReturn.OK:
                loggers['info'].info(f"Decodebin {pad_name} linked to nvvideoconvert")
            else:
                loggers['error'].error(f"Can't link decodebin {pad_name} to nvvideoconvert")
                return
                
            videocapsfilter_srcpad = videocapsfilter_input.get_static_pad("src")
            if not videocapsfilter_srcpad:
                loggers['error'].error(f"Unable to get src pad of video_capsfilter_input_{source_id}")
                return

            if videocapsfilter_srcpad.link(mux_sinkpad) == Gst.PadLinkReturn.OK:
                loggers['info'].info(f"Video preprocessing {pad_name} linked to pipeline")
            else:
                loggers['error'].error(f"Failed to link capsfilter of {pad_name} to streammux")
    
    except Exception as e:
        loggers['error'].error(f"Can't configure {gstname} for source {source_id}: {e}")
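One thing I am wondering about: these elements are created inside cb_newpad, while the pipeline may already be PLAYING, so perhaps they stay in the NULL state unless I explicitly sync them with the parent. This is the kind of helper I mean (the helper name is my own):

```python
# Elements added to an already-running pipeline stay in the NULL state
# until they are synced with the parent's state; a branch stuck in NULL
# never pushes buffers, which would look exactly like a frozen pipeline.
# The helper name is my own.
def add_and_sync(pipeline, *elements):
    """Add elements to the pipeline and bring them to the pipeline's state."""
    for element in elements:
        pipeline.add(element)
    for element in elements:
        # No-op while the pipeline is still NULL/READY, required once PLAYING.
        element.sync_state_with_parent()
```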

Can you check that I am not missing anything?

Best regards

Hi again,

I noticed that inside the cb_newpad I don’t do the following check that is done in the example:

/* Need to check if the pad created by the decodebin is for video and not
 * audio. */
if (!strncmp (name, "video", 5)) {
  /* Link the decodebin pad only if decodebin has picked nvidia
   * decoder plugin nvdec_*. We do this by checking if the pad caps contain
   * NVMM memory features. */
  if (gst_caps_features_contains (features, GST_CAPS_FEATURES_NVMM)) {
    /* Get the source bin ghost pad */
    GstPad *bin_ghost_pad = gst_element_get_static_pad (source_bin, "src");
    if (!gst_ghost_pad_set_target (GST_GHOST_PAD (bin_ghost_pad),
            decoder_src_pad)) {
      g_printerr ("Failed to link decoder src pad to source bin ghost pad\n");
    }
    gst_object_unref (bin_ghost_pad);
  } else {
    g_printerr ("Error: Decodebin did not pick nvidia decoder plugin.\n");
  }
}
}
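In Python I assume the equivalent check on the pad caps would be something like this (my own untested translation):

```python
# Rough Python translation of the C check above: only treat the decodebin
# pad as usable if its caps carry NVMM memory features, i.e. an NVIDIA
# hardware decoder was picked. This is my own sketch, not tested.
def pad_has_nvmm(pad):
    caps = pad.get_current_caps() or pad.query_caps(None)
    features = caps.get_features(0)
    return features.contains("memory:NVMM")
```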

In addition, I create the elements dynamically inside the cb_newpad depending on whether the pad corresponds to video or audio.

Does this have any influence?

Best regards

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

Could you refer to our demo code for multiple sources, deepstream_test_3.py? Then you can add the nvvideoconvert after the source_bin.
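For example, the per-source linking in that sample could be changed roughly like this (an untested sketch; the "conv_%u"/"caps_%u" names and the 1280x720 target resolution are placeholders, not part of the sample):

```python
# Untested sketch for deepstream_test_3.py: route each source_bin through
# nvvideoconvert + capsfilter before the nvstreammux request pad.
# Element names and the 1280x720 target are placeholders.

def mux_pad_name(i):
    return "sink_%u" % i  # %u behaves like %d in Python's % formatting

def link_source_through_conv(pipeline, source_bin, streammux, i):
    import gi  # lazy import so mux_pad_name stays usable without GStreamer
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    conv = Gst.ElementFactory.make("nvvideoconvert", "conv_%u" % i)
    capsfilter = Gst.ElementFactory.make("capsfilter", "caps_%u" % i)
    capsfilter.set_property("caps", Gst.Caps.from_string(
        "video/x-raw(memory:NVMM), format=RGBA, width=1280, height=720"))
    for el in (conv, capsfilter):
        pipeline.add(el)
        el.sync_state_with_parent()  # needed if the pipeline is already PLAYING
    source_bin.link(conv)
    conv.link(capsfilter)
    capsfilter.get_static_pad("src").link(streammux.get_request_pad(mux_pad_name(i)))
```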

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.