GStreamer filters vs OpenCV

Hi all

I find the colour reproduction quite washed out using the Pi v2 camera and was hoping to boost the contrast/saturation through the GStreamer (nvarguscamerasrc) pipeline. Is this possible, or should I do it in OpenCV (which I'd imagine will add considerable overhead)?

Many thanks,

If the operations you intend to perform on frames are supported on GpuMats with the opencv_cuda* libs, then you may try nvivafilter. Here is an example for fisheye undistortion that you could adapt to your case (searching this forum you may find other examples, though for older L4T and OpenCV versions).
Note that you can also use nvivafilter without OpenCV and work directly in CUDA.
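Either way, the custom lib just slots into the capture pipeline as the nvivafilter processing stage. Roughly something like this (the .so name is a placeholder for your own build and the caps are only an example; as I understand it, nvivafilter dlopens the customer lib and calls your processing hook on each NVMM frame):

    gst-launch-1.0 nvarguscamerasrc \
      ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' \
      ! nvivafilter cuda-process=true customer-lib-name=./libmy_cuda_process.so \
      ! 'video/x-raw(memory:NVMM), format=RGBA' \
      ! nvegltransform ! nveglglessink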

FWIW, nvivafilter is not available on all platforms (not on x86), so if you want your plugin to work alongside other NVIDIA plugins on x86, you'll need an alternative. There is example code in DeepStream for a plugin similar to nvivafilter.

Right now, the example has code that converts to a standard CPU Mat by default, but if you want, you can tear that conversion out and operate on the NVMM buffers directly using CUDA. You can do a contrast operation (or any curve) with a 1D lookup table yourself (e.g. an array of length 256, with each index mapping to an output value) or with NPP, which has primitives for that (rough sketch below). You can probably use shared memory to store the LUT itself (that's my plan, anyway). Unfortunately the license for the example is proprietary, so any modifications you make will be owned by NVIDIA:

NVIDIA Corporation and its licensors retain all intellectual property
and proprietary rights in and to this software, related documentation
and any modifications thereto. Any use, reproduction, disclosure or
distribution of this software and related documentation without an express
license agreement from NVIDIA Corporation is strictly prohibited.
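Back on the LUT idea, here is a minimal sketch of what I have in mind (my own, not from the NVIDIA example). It assumes an RGBA frame already mapped to a device pointer with its pitch in bytes, and a 16x16 thread block so all 256 LUT entries can be staged into shared memory by the first 256 threads:

    // Host side: build the curve once, e.g. a simple contrast stretch around mid-grey.
    float contrast = 1.3f;                                // > 1.0f boosts contrast
    unsigned char lut[256];
    for (int i = 0; i < 256; ++i) {
        float v = (i - 128) * contrast + 128.0f;
        lut[i] = (unsigned char) fminf(fmaxf(v, 0.0f), 255.0f);
    }
    // cudaMemcpy lut into a device buffer d_lut once, then run the kernel per frame.

    __global__ void apply_lut_rgba(unsigned char *frame, const unsigned char *d_lut,
                                   int width, int height, int pitch)
    {
        __shared__ unsigned char s_lut[256];
        int tid = threadIdx.y * blockDim.x + threadIdx.x; // 16x16 block -> 256 threads
        if (tid < 256)
            s_lut[tid] = d_lut[tid];
        __syncthreads();

        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height)
            return;

        unsigned char *px = frame + y * pitch + x * 4;    // RGBA, 4 bytes per pixel
        px[0] = s_lut[px[0]];
        px[1] = s_lut[px[1]];
        px[2] = s_lut[px[2]];
        // px[3] (alpha) left untouched
    }

    // launch: dim3 block(16, 16); dim3 grid((width + 15) / 16, (height + 15) / 16);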

Or you could do what I'm doing and write a plugin and filters from scratch using gst-template. Please note that the template script is currently broken, so if you use it you'll have to fix some things in the generated code before it will build. None of the PRs or the author's code actually fix the issue; you could either revert the last commit to master (the one that breaks it) or use the broken template and fix things yourself. Feel free to fork my repo and the submodule, since it's in a more or less working state. It's just boilerplate for now, but you can get an idea of what's intended, and it has some basic tests.

Not sure what you mean here. You would get an NVMM pointer directly suitable for a GpuMat. My example uses CPU Mats and uploads to GpuMats only in the init function, which is run once beforehand; all frame processing is done with GpuMats.
Sorry if I've misunderstood, did you mean something else?
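For reference, the mapping inside the customer lib's gpu_process looks roughly like this (a sketch assuming an RGBA pipeline; error checks and the eglColorFormat check are trimmed, and details may vary slightly between L4T releases):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <cuda.h>
    #include <cudaEGL.h>
    #include <opencv2/core/cuda.hpp>

    static void gpu_process(EGLImageKHR image, void **userPtr)
    {
        CUgraphicsResource resource = NULL;
        CUeglFrame eglFrame;

        cuGraphicsEGLRegisterImage(&resource, image, CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
        cuGraphicsResourceGetMappedEglFrame(&eglFrame, resource, 0, 0);
        cuCtxSynchronize();

        if (eglFrame.frameType == CU_EGL_FRAME_TYPE_PITCH) {
            // Wrap the NVMM buffer in a GpuMat, no copy involved.
            cv::cuda::GpuMat frame(eglFrame.height, eglFrame.width, CV_8UC4,
                                   eglFrame.frame.pPitch[0], eglFrame.pitch);
            // ... process frame in place with the opencv_cuda* routines here ...
        }

        cuCtxSynchronize();
        cuGraphicsUnregisterResource(resource);
    }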

I was referring to the gst-dsexample.

Their example uses cudaMallocHost and converts to a cv::Mat even if you don't need it, so there's a note in the code that you might want to remove that part if you want to work directly with NV12/RGBA. In the example code, DsExampleProcess is given dsexample->cvmat->data to process. If you need the NvBufSurface*, you can get it by mapping the incoming GstBuffer and casting the mapped data.
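Roughly, following the gst-dsexample pattern (error handling trimmed; inbuf is the buffer handed to the transform function):

    // Inside the plugin's transform function (needs gst/gst.h, string.h and nvbufsurface.h):
    GstMapInfo in_map_info;
    NvBufSurface *surface = NULL;

    memset(&in_map_info, 0, sizeof(in_map_info));
    if (!gst_buffer_map(inbuf, &in_map_info, GST_MAP_READ)) {
        GST_ERROR("failed to map the incoming GstBuffer");
        return GST_FLOW_ERROR;
    }
    surface = (NvBufSurface *) in_map_info.data;

    // surface->surfaceList[0] now exposes dataPtr, pitch, width, height,
    // colorFormat, ... for the first (usually only) frame in the batch.

    gst_buffer_unmap(inbuf, &in_map_info);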