How do I do a custom image warp in a deepstream pipeline

Hi Guys,

I am working on a pipeline where I want to warp my input image before sending it to an inference step. I have not started working on the inference part yet, so this question is purely about how to prepare my video before feeding it to an inference step.

First, some backstory.

I have been prototyping the system in a browser using WebGL. There I set up a custom GL canvas with a quad covering the entire viewport and fed my source image to my shader as a texture. This allowed me to unwarp the source image with my custom warp function and render an output of a different size than the input.

This is all fine and dandy but the prototype runs in the browser. I need to do the same in a deepstream/gstreamer pipeline, and here is where I need some advice.

I have been looking at the gstglshader plugin, which can run a custom shader, but it requires a memcopy from CPU to GPU and forces the output image to be the same size as the input. That does not look like the way to go in a DeepStream pipeline (it seems old and not optimized for NVIDIA memory).

I guess I need to do the operation with a CUDA kernel to avoid the memcopy.

I looked at nvsample_cudaprocess, which uses the nvivafilter, but it does not reveal all the bits and pieces I need to fulfill my mission.

The open questions to me are how to:

  • run custom CUDA kernels. Is nvivafilter a good start?
  • set up a CUDA kernel in a GStreamer pipeline that produces an output image of a different size than the input?
  • do bilinear sampling in my source image?
  • handle color spaces (I am used to RGB from my GLSL shaders)?
  • sideload custom data to the “shader”, the way I send uniforms to a GLSL shader? (A rough sketch of the kind of kernel I am picturing follows after this list.)
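
Something along these lines is what I have in mind (untested, just to show the idea; the pitched RGBA layout, the placeholder radial warp and all the names are assumptions on my part, not taken from any DeepStream sample):

```cuda
#include <cuda_runtime.h>

// The "uniforms": a plain struct passed by value to the kernel,
// which is how I would sideload per-frame parameters.
struct WarpParams
{
    int   srcWidth,  srcHeight;
    int   dstWidth,  dstHeight;
    float k;          // example distortion coefficient (placeholder)
    float cx, cy;     // warp centre in source pixels
};

// Bilinear sample from a pitched RGBA (uchar4) surface.
__device__ uchar4 bilinearSample(const uchar4* src, size_t srcPitch,
                                 int w, int h, float x, float y)
{
    x = fminf(fmaxf(x, 0.0f), w - 1.001f);
    y = fminf(fmaxf(y, 0.0f), h - 1.001f);
    int   x0 = (int)x, y0 = (int)y;
    int   x1 = x0 + 1, y1 = y0 + 1;
    float fx = x - x0,  fy = y - y0;

    const uchar4* row0 = (const uchar4*)((const char*)src + y0 * srcPitch);
    const uchar4* row1 = (const uchar4*)((const char*)src + y1 * srcPitch);
    uchar4 p00 = row0[x0], p10 = row0[x1], p01 = row1[x0], p11 = row1[x1];

    uchar4 out;
    out.x = (unsigned char)((1-fx)*(1-fy)*p00.x + fx*(1-fy)*p10.x + (1-fx)*fy*p01.x + fx*fy*p11.x);
    out.y = (unsigned char)((1-fx)*(1-fy)*p00.y + fx*(1-fy)*p10.y + (1-fx)*fy*p01.y + fx*fy*p11.y);
    out.z = (unsigned char)((1-fx)*(1-fy)*p00.z + fx*(1-fy)*p10.z + (1-fx)*fy*p01.z + fx*fy*p11.z);
    out.w = 255;
    return out;
}

// One thread per output pixel: map it back into the source image with a
// custom warp function, then bilinearly sample the source there.
// Source and destination sizes are independent.
__global__ void warpKernel(const uchar4* src, size_t srcPitch,
                           uchar4* dst, size_t dstPitch,
                           WarpParams p)
{
    int dx = blockIdx.x * blockDim.x + threadIdx.x;
    int dy = blockIdx.y * blockDim.y + threadIdx.y;
    if (dx >= p.dstWidth || dy >= p.dstHeight) return;

    // Placeholder warp: simple radial model around (cx, cy).
    // The real fisheye-to-pinhole mapping would go here.
    float u  = dx * (float)p.srcWidth  / p.dstWidth  - p.cx;
    float v  = dy * (float)p.srcHeight / p.dstHeight - p.cy;
    float s  = 1.0f + p.k * (u * u + v * v);
    float sx = p.cx + u * s;
    float sy = p.cy + v * s;

    uchar4* dstRow = (uchar4*)((char*)dst + dy * dstPitch);
    dstRow[dx] = bilinearSample(src, srcPitch, p.srcWidth, p.srcHeight, sx, sy);
}
```

The launch would be one thread per output pixel (e.g. 16×16 blocks over the destination size), with WarpParams passed by value each frame, which is how I imagine replacing GLSL uniforms.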

Some pointers or a reference to an example that does something similar would be highly appreciated.

Kind regards

Jesper

What do you want DeepStream inference to do?

I have a fisheye image with a large field of view, but it is somewhat distorted, which prevents me from using standard detectors. I want to sample part of it as what a regular pinhole camera would see from some orientation within that image. Then I will feed that image to a standard object detector and use the detections to adjust the direction of my virtual camera, essentially producing an automated object follow-cam with no moving parts.
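
To make it concrete, the per-pixel mapping I have in mind is roughly this, assuming an equidistant fisheye model (the lens model, the function and parameter names are all my own sketch, not from any DeepStream sample):

```cuda
// Map a pixel of the virtual pinhole camera to a coordinate in the fisheye
// image. R is the 3x3 rotation of the virtual camera, fPin its focal length
// in pixels, (cxF, cyF) the fisheye image centre and fFish the fisheye focal
// length. The result would then be bilinearly sampled as in my first sketch.
__device__ float2 pinholeToFisheye(int px, int py,
                                   float fPin, float cxPin, float cyPin,
                                   const float R[9],
                                   float fFish, float cxF, float cyF)
{
    // Ray through the pinhole pixel, rotated into the fisheye frame.
    float x = (px - cxPin) / fPin;
    float y = (py - cyPin) / fPin;
    float z = 1.0f;
    float rx = R[0]*x + R[1]*y + R[2]*z;
    float ry = R[3]*x + R[4]*y + R[5]*z;
    float rz = R[6]*x + R[7]*y + R[8]*z;

    // Equidistant projection: image radius is proportional to the angle
    // between the ray and the optical axis.
    float n     = sqrtf(rx*rx + ry*ry + rz*rz);
    float theta = acosf(fminf(fmaxf(rz / n, -1.0f), 1.0f));
    float phi   = atan2f(ry, rx);
    float r     = fFish * theta;

    return make_float2(cxF + r * cosf(phi), cyF + r * sinf(phi));
}
```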

Does it make sense?

Sorry for the late reply.

The upcoming release (DS 5.0) will have an example showcasing how to use OpenCV for filtering applications.
We will also enhance the dewarper to support additional projections / fish-eye lens filtering.

How can I add a dewarper in front of the detector in the deepstream-test5 app?

Hi Rusli,

Please open a new topic for your issue. Thanks.