“MyFilter” is a custom transform plugin that I built myself, but I don’t know how to get a video frame from “nvvidconv”. Does anyone know how to get a video frame from nvvidconv?
When I use “videoconvert”, I can get YUV data with the correct size for one YUV frame. But with “nvvidconv”, the mapped data is very small (e.g. size = 744). I want to get the data in the transform function like this:
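Roughly, my transform callback maps the buffer like this (a minimal sketch, assuming a GstBaseTransform in-place transform; the function name is illustrative, not my real code):

```c
#include <gst/gst.h>
#include <gst/base/gstbasetransform.h>

/* Sketch: map the incoming buffer in an in-place transform.
 * With videoconvert upstream, info.size is a full YUV frame;
 * with nvvidconv the buffer lives in NVMM (GPU) memory, so the
 * mapped size is only a small handle (~744 bytes), not pixels. */
static GstFlowReturn
my_filter_transform_ip (GstBaseTransform *trans, GstBuffer *buf)
{
  GstMapInfo info;

  if (!gst_buffer_map (buf, &info, GST_MAP_READWRITE))
    return GST_FLOW_ERROR;

  /* info.data / info.size: process the frame here */

  gst_buffer_unmap (buf, &info);
  return GST_FLOW_OK;
}
```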
Hi moondev,
The output of nvvidconv is not a CPU-accessible buffer; that’s why you see the error.
So in your case, you need to do a YUV-to-RGBA conversion, do post-processing on the RGBA buffer, and then render out?
Please share details so that we can give suggestions.
Thanks, DaneLLL.
As you described, I want to do post-processing on the RGBA buffer and then render out.
I did it by following the nvsample_cudaprocess example. This is the new GStreamer pipeline:
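It is along the lines of the nvsample_cudaprocess sample pipeline (a sketch only; the source element and caps depend on the setup, and the nvivafilter properties shown are the ones from the sample):

```shell
gst-launch-1.0 nvcamerasrc ! \
  'video/x-raw(memory:NVMM), format=I420' ! \
  nvvidconv ! 'video/x-raw(memory:NVMM), format=RGBA' ! \
  nvivafilter cuda-process=true \
    customer-lib-name=libnvsample_cudaprocess.so ! \
  'video/x-raw(memory:NVMM), format=RGBA' ! nvoverlaysink
```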
I added my CUDA code in the gpu_process() function and it worked.
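The hook I filled in looks roughly like this (signature as in nvsample_cudaprocess; the body here is just a comment outline, not the sample's actual code):

```c
#include <EGL/egl.h>
#include <EGL/eglext.h>

/* Called by nvivafilter for every frame; the EGLImage wraps the
 * RGBA NVMM buffer. The sample maps it into CUDA and runs a kernel
 * on the pixels; my custom CUDA processing goes in here. */
static void
gpu_process (EGLImageKHR image, void **usrptr)
{
  /* 1. Register the EGLImage with CUDA
     (cuGraphicsEGLRegisterImage) and get the mapped eglFrame
     (cuGraphicsResourceGetMappedEglFrame).
     2. Launch the custom CUDA kernel on the RGBA plane.
     3. Unregister the resource before returning. */
}
```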
But I have another problem when I want to display the original image and the processed image side by side. nvoverlaysink seems to be the only choice for rendering the RGBA buffer with the above pipeline, so the processed image is displayed full-screen and there is no place left to display the original image.