Hi everyone
- Hardware Platform (Jetson / GPU): RTX2080TI
- DeepStream Version: 6.2 (python bindings)
- TensorRT Version: 8.5.2-1+cuda11.8
- NVIDIA GPU Driver Version (valid for GPU only): 530.30.02
- Issue Type (questions, new requirements, bugs): Question
I have been trying to use the NEW NVSTREAMMUX, following the recommendations in the documentation. Among the properties that have been removed compared to the OLD NVSTREAMMUX are the following (see the short before/after sketch after this list):
- width: N/A; scaling and color conversion support is deprecated.
- height: N/A; scaling and color conversion support is deprecated.
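In other words, the resolution I used to configure directly on the muxer can no longer be set there. Roughly like this (a sketch only; the 1920x1080 values and the batch size are just example numbers, not my full code):

# OLD nvstreammux: output resolution is set on the muxer itself
streammux = Gst.ElementFactory.make("nvstreammux", "streammux")
streammux.set_property("width", 1920)
streammux.set_property("height", 1080)
streammux.set_property("batch-size", 2)
streammux.set_property("batched-push-timeout", 40000)

# NEW nvstreammux (run with USE_NEW_NVSTREAMMUX=yes): no width/height,
# so any rescaling has to happen upstream of the muxer
streammux = Gst.ElementFactory.make("nvstreammux", "streammux")
streammux.set_property("batch-size", 2)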
So, for cases where the sources have different resolutions, the documentation suggests the following solution:
In this scenario, DeepStream recommends adding nvvideoconvert + capsfilter before each nvstreammux sink pad (enforcing the same resolution and format for all sources connecting to the new nvstreammux). This ensures that the heterogeneous nvstreammux batch output has buffers with the same caps (resolution and format).
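My reading of that recommendation, per source, is roughly the following (a sketch only; the helper name, the NV12 format and the 1280x720 target resolution are my own assumptions, not taken from the documentation):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst


def add_converter_chain(pipeline, streammux, source_id, width=1280, height=720):
    # Hypothetical helper: per-source nvvideoconvert + capsfilter enforcing a
    # single resolution/format in front of the new nvstreammux sink pad
    conv = Gst.ElementFactory.make("nvvideoconvert", f"conv_{source_id}")
    caps = Gst.ElementFactory.make("capsfilter", f"caps_{source_id}")
    caps.set_property(
        "caps",
        Gst.Caps.from_string(
            f"video/x-raw(memory:NVMM), format=NV12, width={width}, height={height}"
        ),
    )
    pipeline.add(conv)
    pipeline.add(caps)
    conv.link(caps)
    # Request a sink pad on the muxer and link the capsfilter output to it
    sinkpad = streammux.get_request_pad("sink_%u" % source_id)
    caps.get_static_pad("src").link(sinkpad)
    return conv  # the decoded src pad still has to be linked to conv's sink pad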
I have tried to build a pipeline with multiple sources using the NEW NVSTREAMMUX as described in that solution, adding an nvvideoconvert in front of the muxer for each source. However, I can't get it to work. I add these elements in cb_newpad, as shown below:
def _cb_newpad(decodebin, pad, data):
    loggers['info'].info("Creating cb_newpad")
    caps = pad.get_current_caps()
    gststruct = caps.get_structure(0)
    gstname = gststruct.get_name()
    source_id, pipeline, streammux_video = data
    loggers['info'].info(f"gstname={gstname}")
    pad_name = "sink_%u" % source_id
    if gstname.find("video") != -1 and streammux_video is not None:
        # First queue between the decodebin and the nvvideoconvert
        queue_input = Gst.ElementFactory.make("queue", f"video_queue_input_1_{source_id}")
        if not queue_input:
            loggers['error'].error(f"Unable to create queue_input for source {source_id}")
            return
        pipeline.add(queue_input)
        decodebin.link(queue_input)

        # nvvideoconvert to rescale/convert the decoded frames
        videoconvert_input = Gst.ElementFactory.make("nvvideoconvert", f"videoconvert_input_{source_id}")
        if not videoconvert_input:
            loggers['error'].error(f"Unable to create videoconvert_input for source {source_id}")
            return
        pipeline.add(videoconvert_input)
        queue_input.link(videoconvert_input)

        # Second queue between the nvvideoconvert and the nvstreammux
        queue_input_2 = Gst.ElementFactory.make("queue", f"video_queue_input_2_{source_id}")
        if not queue_input_2:
            loggers['error'].error(f"Unable to create queue_input_2 for source {source_id}")
            return
        pipeline.add(queue_input_2)
        videoconvert_input.link(queue_input_2)

        srcpad = queue_input_2.get_static_pad("src")
        if not srcpad:
            loggers['error'].error(f"Unable to get src pad for source {source_id}")
            return
        sinkpad = streammux_video.get_request_pad(pad_name)
        if not sinkpad:
            loggers['error'].error(f"Unable to request sink pad for source {source_id}")
            return
        if srcpad.link(sinkpad) == Gst.PadLinkReturn.OK:
            loggers['info'].info(f"Decodebin {pad_name} linked to pipeline")
        else:
            loggers['error'].error(f"Failed to link decodebin: {pad_name}")
However, when I don’t add the nvvideoconvert I have no problems and can run the pipeline normally, but then I am unable to rescale the incoming video, which I need. Here is the version without this element:
def _cb_newpad(decodebin, pad, data):
    loggers['info'].info("Creating cb_newpad")
    caps = pad.get_current_caps()
    gststruct = caps.get_structure(0)
    gstname = gststruct.get_name()
    source_id, pipeline, streammux_video = data
    loggers['info'].info(f"gstname={gstname}")
    pad_name = "sink_%u" % source_id
    if gstname.find("video") != -1 and streammux_video is not None:
        # Link the decodebin src pad directly to the nvstreammux sink pad
        sinkpad = streammux_video.get_request_pad(pad_name)
        if not sinkpad:
            loggers['error'].error(f"Unable to request sink pad for source {source_id}")
            return
        if pad.link(sinkpad) == Gst.PadLinkReturn.OK:
            loggers['info'].info(f"Decodebin {pad_name} linked to pipeline")
        else:
            loggers['error'].error(f"Failed to link decodebin: {pad_name}")
This is my pipeline:
Below I also attach the logs of the run with the nvvideoconvert added after each decodebin, prior to the nvstreammux. As you can see, the execution freezes and does not process any incoming video data.
info.logs (18.8 KB)
Could you share a code snippet or point me to a Python example where an nvvideoconvert is added before the new nvstreammux, as indicated in the documentation?
Best regards