I am working on a pipeline where I want to warp my input image before sending it to an inference step. I have not started on the inference part yet, so this question is purely about how to prepare the video frames before feeding them to inference.
First, some backstory.
I prototyped the system in a browser using WebGL. There I set up a custom GL canvas with a quad covering the entire viewport and fed my source image to my shader as a texture. This let me unwarp the source image with my custom warp function and render an image of a different size than the input.
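To make the analogy concrete, here is roughly how I imagine that fragment-shader logic looking as a CUDA kernel. This is just a sketch I put together: `warp()` is a stand-in for my real warp function, and I am assuming a packed RGBA buffer with a byte pitch.

```cuda
#include <cuda_runtime.h>

// Placeholder for my real warp function: maps a normalized output
// coordinate (0..1) to a normalized source coordinate (0..1).
__device__ float2 warp(float2 uv)
{
    return uv;  // identity for the sketch
}

// One thread per OUTPUT pixel, so the output size is independent of
// the input size. Buffers are assumed packed RGBA with a byte pitch.
__global__ void warpKernel(const uchar4 *src, int srcW, int srcH, size_t srcPitch,
                           uchar4 *dst, int dstW, int dstH, size_t dstPitch)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= dstW || y >= dstH)
        return;

    // Equivalent of the normalized coordinate I use in the fragment shader.
    float2 uv = make_float2((x + 0.5f) / dstW, (y + 0.5f) / dstH);
    float2 s  = warp(uv);

    // Source position in pixels, clamped to stay inside the image.
    float sx = fminf(fmaxf(s.x * srcW - 0.5f, 0.0f), srcW - 1.0f);
    float sy = fminf(fmaxf(s.y * srcH - 0.5f, 0.0f), srcH - 1.0f);

    int x0 = min((int)sx, srcW - 2);
    int y0 = min((int)sy, srcH - 2);
    float fx = sx - x0, fy = sy - y0;

    const uchar4 *r0 = (const uchar4 *)((const char *)src + y0 * srcPitch);
    const uchar4 *r1 = (const uchar4 *)((const char *)src + (y0 + 1) * srcPitch);
    uchar4 p00 = r0[x0], p10 = r0[x0 + 1];
    uchar4 p01 = r1[x0], p11 = r1[x0 + 1];

    // Manual bilinear blend; a CUDA texture object could do this in hardware.
    float w00 = (1 - fx) * (1 - fy), w10 = fx * (1 - fy);
    float w01 = (1 - fx) * fy,       w11 = fx * fy;
    uchar4 out;
    out.x = (unsigned char)(p00.x * w00 + p10.x * w10 + p01.x * w01 + p11.x * w11);
    out.y = (unsigned char)(p00.y * w00 + p10.y * w10 + p01.y * w01 + p11.y * w11);
    out.z = (unsigned char)(p00.z * w00 + p10.z * w10 + p01.z * w01 + p11.z * w11);
    out.w = (unsigned char)(p00.w * w00 + p10.w * w10 + p01.w * w01 + p11.w * w11);

    ((uchar4 *)((char *)dst + y * dstPitch))[x] = out;
}
```

What I don't know is how to wire a kernel like this into a GStreamer element, which is what the questions below are about.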
This is all fine and dandy, but the prototype runs in the browser. I need to do the same in a DeepStream/GStreamer pipeline, and this is where I need some advice.
I have been looking at the gstglshader plugin, which can run a custom shader, but it requires a memcpy from CPU to GPU and forces the output image to be the same size as the input. This does not look like the way to go in a DeepStream pipeline (it seems old and not optimized for NVIDIA memory)?
I guess I need to do the operation in a CUDA kernel to avoid the memcpy.
I looked at nvsample_cudaprocess, which uses nvivafilter, but it does not reveal all the bits and pieces I need to fulfill my mission.
The open questions for me are how to:
- run custom CUDA kernels. Is nvivafilter a good starting point?
- set up a CUDA kernel in a GStreamer pipeline that produces an output image of a different size than the input
- do bilinear sampling of my source image
- handle color spaces (I am used to working in RGB from my GLSL shaders)
- sideload custom data to the “kernel”, the way I send uniform variables to a GLSL shader
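To make the last two points concrete, this is roughly what I imagine the host side could look like. Everything here is my own sketch, not an existing API: `WarpParams` plays the role of my GLSL uniforms, and `myWarpKernel` is a hypothetical kernel.

```cuda
#include <cuda_runtime.h>

// My stand-in for GLSL uniforms: plain kernel arguments (here bundled
// in a struct passed by value) replace the glUniform* calls.
struct WarpParams {
    float k1, k2;   // e.g. distortion coefficients
    float cx, cy;   // e.g. principal point
};

// Forward declaration of a hypothetical warp kernel.
__global__ void myWarpKernel(const uchar4 *src, int srcW, int srcH, size_t srcPitch,
                             uchar4 *dst, int dstW, int dstH, size_t dstPitch,
                             WarpParams params);

void runWarp(const uchar4 *d_src, int srcW, int srcH, size_t srcPitch,
             uchar4 *d_dst, int dstW, int dstH, size_t dstPitch,
             WarpParams params, cudaStream_t stream)
{
    // The grid is sized to the OUTPUT image, so dstW/dstH are free to
    // differ from the input dimensions.
    dim3 block(16, 16);
    dim3 grid((dstW + block.x - 1) / block.x,
              (dstH + block.y - 1) / block.y);
    myWarpKernel<<<grid, block, 0, stream>>>(d_src, srcW, srcH, srcPitch,
                                             d_dst, dstW, dstH, dstPitch,
                                             params);
}
```

I would expect to call something like runWarp from wherever the plugin hands me a device pointer for the frame, but how I get that pointer, and where the different-size output buffer comes from, is exactly what I am unsure about.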
Any pointers or a reference to an example that does something similar would be highly appreciated.