Custom plugin based on gstdsexample vs nvivafilter?

Jetson Xavier NX / Jetpack 5.1.2 / DeepStream 6.2 / gstdsexample

Hi,
I’m trying to customise a GStreamer plugin so that I can do some image processing.

For example, I want to remap or warp the input buffer into the output buffer,
and in the end composite multiple remapped camera feeds into one panoramic frame.

In this case, should I write a custom DeepStream plugin based on gstdsexample, changing the transform_ip or transform function?
Or should I use nvivafilter with a custom CUDA kernel (.cu)?

Thank you.

What pipeline will you use? Jetson accelerated GStreamer pipeline or DeepStream pipeline? DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

If it is Jetson accelerated GStreamer pipeline, nvivafilter is OK.
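
For reference, a typical nvivafilter pipeline on Jetson looks like the sketch below. The cuda-process and customer-lib-name properties are documented in the Jetson accelerated GStreamer guide; libnvsample_cudaprocess.so is the stock sample library, which you would replace with your own .so built from a custom .cu. Resolution and sink are placeholders:

```
gst-launch-1.0 nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM),format=NV12,width=1920,height=1080' ! \
  nvivafilter cuda-process=true \
    customer-lib-name=libnvsample_cudaprocess.so ! \
  'video/x-raw(memory:NVMM),format=RGBA' ! nv3dsink
```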

If it is DeepStream pipeline, you need to customize with Gst-nvdsvideotemplate — DeepStream 6.2 Release documentation


@Fiona.Chen
Thank you for your quick response.

Basically I want to do the real-time video pipeline for below.

 multi nvarguscamera --> panorama (this will be the custom plugin) --> encoding the panorama (GStreamer is enough)
                                                                   --> detection from the panorama (DeepStream needed)

So in this case, should I use DeepStream for the best performance?
I hadn’t checked nvdsvideotemplate before; I was only referencing dsexample.

In short, I should use nvdsvideotemplate for the customisation.

Kind Regards,

What kind of panorama do you need? nvdsvideotemplate is not suitable for the multiple-inputs, one-output case.

Fiona,
Sorry for the late reply.

  1. The number of inputs is two (e.g. left and right) and the output is one.
     I thought the tee element could handle the two remapped inputs to be composited.

  2. If you say nvdsvideotemplate is not suitable for it, which approach should I take?

Kind Regards,

No need. You may need two nvdspreprocess instances to provide the two inputs.

Gst-nvdspreprocess (Alpha) — DeepStream documentation 6.4 documentation is for models whose inputs are not single images. The two input images need two customized nvdspreprocess plugins.

Thank you so much Fiona, It is so helpful!
I will take a look :)

Kind Regards,

Hi Fiona,

I checked the plugin and I’m not sure if this is what I need or not.

Let me elaborate more on the pipeline:

Left nvarguscamera (NV12)  --\
                              +--> Custom plugin (remap each input frame and blend into one frame) --> downstream encoder --> ...
Right nvarguscamera (NV12) --/

My pipeline, as you can see from the diagram above, remaps (or warps) the left and right streams,
and then composites them into one stitched image, overlapping and blending some parts of the remapped frames.

  1. In the nvdspreprocess sample code, I don’t know where I can overlap the input frames and then output them as one frame.

  2. And where can I implement the remapping (as a CUDA kernel)? I see prepare_tensor in nvdspreprocess_impl.cpp, but I’m not sure, because the point of the customisation is image processing, not detection…

Kind Regards,

All DeepStream plugins work on batches. You need to use nvstreammux to batch the two sources from the two cameras. The nvdspreprocess sample library supports tensors with ROIs; if you configure the whole picture as the ROI, then the tensor is the whole image.

You don’t need to implement your mapping unless your model’s input tensor needs special preprocessing. The current nvdspreprocess sample library supports scaling, normalization and format-conversion preprocessing.

Please read the sample code; every detail is available in the source code.
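
As a hedged illustration of the whole-frame-ROI setup described above, the key names below follow the config_preprocess.txt sample shipped with DeepStream; the 1920×1080 size, the two source IDs, the batch-of-two tensor shape, and the library path are assumptions for a two-camera pipeline:

```
[property]
enable=1
processing-width=1920
processing-height=1080
network-input-shape=2;3;1080;1920
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[group-0]
src-ids=0;1
process-on-roi=1
# one ROI per source covering the full frame: left;top;width;height
roi-params-src-0=0;0;1920;1080
roi-params-src-1=0;0;1920;1080
```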

Fiona,
My point in doing this is to implement custom image processing (warping of two inputs → blending into one panorama). Please refer to the diagram I attached above.

My suggestion is just for your reference.

Of course you can implement your own solution.

Thank you for your support.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.