I’m using the Tegra Multimedia API (r32.5) to grab images from my V4L2 cameras, process them with CUDA, and then encode them to H.265. I implemented this based on the provided samples. It works perfectly, and I can also render and view the final output in the EGLRenderer. The rendering is done using the fd of the NvBuffer (as in the samples).
Now, instead of rendering the encoded output, I want to convert/wrap it into a GstBuffer so that I can push it into a GStreamer pipeline.
I would appreciate any pointers on how to send the NvVideoEncoder output to GStreamer.