DeepStream Custom Plugin Advice

Hi,

I am trying to integrate a fairly complex image pre-processing algorithm into DeepStream (prior to detection) and would like to make sure I’m doing everything the right way. The algorithm is implemented in C++ with OpenCV support and is intended to run on standard video (each frame is processed independently).

To improve the algorithm’s run-time performance, I use caching allocators for both CPU and GPU memory, as well as registered (pinned) memory for transfers between the CPU and GPU. This setup happens once at the start of the algorithm and takes quite a bit of time, but since the application processes video, the one-time cost on the first frame is negligible.
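For context, this is roughly what my one-time setup looks like (a sketch using OpenCV’s CUDA buffer pool and page-locked registration; the pool sizes and frame format are placeholders):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>

// One-time setup: enable OpenCV's stack-based GPU buffer pool and pin a
// host staging buffer so CPU<->GPU copies use page-locked memory.
void init_once (cv::Mat &host_staging, int width, int height)
{
  // Must run before the first cv::cuda::Stream is created.
  cv::cuda::setBufferPoolUsage (true);
  cv::cuda::setBufferPoolConfig (cv::cuda::getDevice (),
                                 64 * 1024 * 1024 /* stack size */,
                                 2 /* stacks per device */);

  // Register (page-lock) the host buffer so uploads/downloads can be
  // asynchronous and avoid an extra internal staging copy.
  host_staging.create (height, width, CV_8UC3);
  cv::cuda::registerPageLocked (host_staging);
}

// Per-frame: draw GPU scratch memory from the pool instead of cudaMalloc.
void process_frame (cv::Mat &host_staging, cv::cuda::Stream &stream)
{
  cv::cuda::BufferPool pool (stream);
  cv::cuda::GpuMat frame = pool.getBuffer (host_staging.size (),
                                           host_staging.type ());
  frame.upload (host_staging, stream);  // async thanks to pinned memory
  /* ... run the pre-processing kernels on `frame` ... */
}
```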

Ideally I would like to create a custom plugin for DS but I’m not sure how to integrate these parts into the plugin.

Any help will be appreciated.
Yuval


The simplest way is to start from the dsexample custom plugin: NVIDIA DeepStream SDK Developer Guide — DeepStream 6.1.1 Release documentation. Note that the sample uses OpenCV without CUDA; you need to build OpenCV with CUDA support yourself beforehand (Jetson ships OpenCV without CUDA because the CUDA support lives in the OpenCV extra-modules package, which you have to install manually due to OpenCV licensing). Regarding the allocators: DeepStream takes care of GPU memory allocation and synchronization for the video frames themselves. If your algorithm allocates extra memory of its own, that allocator setup can also be integrated into dsexample, in the GStreamer init/start stage code.
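For instance, a minimal sketch of where that one-time setup could live, assuming the stock gstdsexample.cpp layout from the SDK (my_algo_init_allocators() is a hypothetical user-provided function):

```cpp
/* In gstdsexample.cpp: start() is called once when the pipeline goes
 * READY -> PAUSED, so one-time allocator/pinning setup belongs here. */
static gboolean
gst_dsexample_start (GstBaseTransform * btrans)
{
  GstDsExample *dsexample = GST_DSEXAMPLE (btrans);

  /* ... existing CUDA/surface setup from the sample stays unchanged ... */

  /* Hypothetical hook: pre-warm caching allocators and register pinned
   * host memory once, so the first real frame pays no setup cost. */
  my_algo_init_allocators ();

  return TRUE;
}
```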


Hi,

I’ve been looking at the dsexample plugin for the past few days and it seems overly complicated (over 1,000 lines of code for a simple Gaussian blur). My end goal is to add a pre-processing stage before inference while still using the hardware-accelerated infrastructure for the image-capture pipeline. I was thinking of doing the following:

  1. Using GStreamer with an appsink to capture frames from the camera (to my understanding this will be highly optimized, since it relies on DeepStream).
  2. Using a callback on the appsink to read each frame and run the pre-processing plus the inference with TensorRT. Alternatively, I could push the result from the callback back into the pipeline and use nvinfer. A rough sketch of what I mean is below.

What do you think?
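Something like this is what I have in mind (a sketch only; the source element and caps are assumptions that depend on the camera and platform):

```cpp
#include <gst/gst.h>
#include <gst/app/gstappsink.h>

// Called on every frame delivered to the appsink.
static GstFlowReturn
on_new_sample (GstAppSink *sink, gpointer user_data)
{
  GstSample *sample = gst_app_sink_pull_sample (sink);
  if (!sample)
    return GST_FLOW_ERROR;

  GstBuffer *buf = gst_sample_get_buffer (sample);
  /* ... map `buf`, run pre-processing + TensorRT inference here ... */

  gst_sample_unref (sample);
  return GST_FLOW_OK;
}

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  /* Assumed capture pipeline; element names and caps depend on the setup. */
  GstElement *pipeline = gst_parse_launch (
      "nvarguscamerasrc ! video/x-raw(memory:NVMM),format=NV12 ! "
      "nvvideoconvert ! video/x-raw(memory:NVMM),format=RGBA ! "
      "appsink name=sink emit-signals=true max-buffers=1 drop=true", NULL);

  GstElement *sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");
  g_signal_connect (sink, "new-sample", G_CALLBACK (on_new_sample), NULL);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  GMainLoop *loop = g_main_loop_new (NULL, FALSE);
  g_main_loop_run (loop);
  return 0;
}
```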

In addition, I noticed that when capturing with GStreamer/DeepStream the frames live in NVMM memory. All the examples I’ve seen convert the buffer to a CPU cv::Mat, but ideally I would like to keep it on the GPU. How do I get from the GstBuffer (in the appsink callback) to a GPU Mat?
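To be concrete, this is roughly the kind of zero-copy wrapping I’m hoping is possible (a sketch assuming RGBA frames already in CUDA device memory, e.g. after nvvideoconvert on a dGPU; Jetson surface layouts may need extra mapping steps):

```cpp
#include <gst/gst.h>
#include <opencv2/core/cuda.hpp>
#include "nvbufsurface.h"  // from the DeepStream SDK

// Wraps the first frame of a DeepStream batch as a GpuMat without copying.
// The GpuMat is only valid while `buf` is alive.
cv::cuda::GpuMat
gpu_mat_from_gst_buffer (GstBuffer *buf)
{
  GstMapInfo map;
  if (!gst_buffer_map (buf, &map, GST_MAP_READ))
    return cv::cuda::GpuMat ();

  // With NVMM caps, the mapped data is an NvBufSurface descriptor.
  NvBufSurface *surf = reinterpret_cast<NvBufSurface *> (map.data);
  NvBufSurfaceParams &p = surf->surfaceList[0];

  // Assumes RGBA, device-resident memory: wrap the pitched device pointer.
  cv::cuda::GpuMat mat (p.height, p.width, CV_8UC4, p.dataPtr, p.pitch);

  gst_buffer_unmap (buf, &map);
  return mat;
}
```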

Thanks
Yuval

There has been no update from you for some time, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Hi @Yuvalg1987,
Could you provide your setup/platform information, as asked in other topics?

Thanks!