I’m trying to figure out the best way to take the CUDA NV12 output of a cuEGLStreamConsumer and pass it through an appsrc to omxh264enc.
Right now the frames are converted from NV12 to RGBAf in CUDA for use with jetson-inference. Then a kernel converts RGBAf to RGB, which goes through the appsrc into a videoconvert element that converts it back to I420.
NV12 (cuda) -> RGBAf (cuda) -> RGB -> I420 -> h264
I feel like I should be able to pass the NV12 directly into the omxh264enc element, eliminating some of those copies.
Can I just take the luma and chroma surfaces, copy them into a GstBuffer, and set the caps as NV12? Has anyone done something similar?
Your case probably works better with tegra_multimedia_api. Please install the samples and check tegra_multimedia_api/include/nvbuf_utils.h
I see that there is an NvEGLImageFromFd function, but how do I go the other way and turn the CUeglFrame into a dmabuf_fd for NvBufferCreate?
Then I assume I would use an NvVideoEncoder and send the result through an appsrc into my pipeline?
Can you check if you can use NvBuffer for ‘RGBAf (cuda)’ in your pipeline?
Below is pseudo code:
// nvbuf_utils.h: EGLImageKHR NvEGLImageFromFd(EGLDisplay display, int dmabuf_fd);
eglimage = NvEGLImageFromFd(egl_display, dmabuf_fd);
Please refer to cuda_postprocess() in tegra_multimedia_api/samples/12_camera_v4l2_cuda
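The round trip in question (hardware buffer → dmabuf fd → EGLImage → CUDA) looks roughly like this in nvbuf_utils / cudaEGL terms; this is a call-sequence sketch based on those headers, not verified code:

```
// allocate a hardware buffer and get its dmabuf fd
NvBufferCreate(&dmabuf_fd, width, height, NvBufferLayout_Pitch, NvBufferColorFormat_NV12);

// wrap the fd as an EGLImage
eglimage = NvEGLImageFromFd(egl_display, dmabuf_fd);

// map the EGLImage into CUDA
cuGraphicsEGLRegisterImage(&resource, eglimage, CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
cuGraphicsResourceGetMappedEglFrame(&egl_frame, resource, 0, 0);

// ... run your CUDA kernel on egl_frame.frame.pPitch[0] / pPitch[1] ...

cuGraphicsUnregisterResource(resource);
NvDestroyEGLImage(egl_display, eglimage);
```

Everything stays in device memory this way; the fd is known from the start because the buffer was created with NvBufferCreate, which sidesteps the "go the other way" problem.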
That sample doesn’t seem to be using Argus.
If I wanted to use Argus would you suggest using a UniqueObj to get NvBuffers and create the eglimage from their file descriptor?
Does that keep everything in device memory? AastaLLL recommended switching from FrameConsumer to a cuEGLStreamConsumer so that I could get CUeglFrames directly for passing into a jetson-inference net.
Is the NvBuffer just a CUeglFrame with an extra wrapper?
Thanks for your help!
Both NvBuffer and CUeglFrame are wrappers to the same device memory.
For Argus -> NvVideoEncoder, you can also refer to tegra_multimedia_api/samples/10_camera_recording
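The Argus-to-encoder hookup in that sample boils down to queueing each frame's dmabuf fd on the encoder's output plane. A rough sketch using the NvVideoEncoder class from the samples (field names abbreviated; treat as pseudocode, not a drop-in implementation):

```
enc = NvVideoEncoder::createVideoEncoder("enc0");
enc->setCapturePlaneFormat(V4L2_PIX_FMT_H264, width, height, 2 * 1024 * 1024);
enc->setOutputPlaneFormat(V4L2_PIX_FMT_YUV420M, width, height);
enc->output_plane.setupPlane(V4L2_MEMORY_DMABUF, 10, false, false);

// per Argus frame: put its NvBuffer dmabuf fd into the v4l2 buffer and queue it
v4l2_buf.m.planes[0].m.fd = dmabuf_fd;
enc->output_plane.qBuffer(v4l2_buf, NULL);

// encoded H.264 bitstream comes back on enc->capture_plane,
// which you could then push through your appsrc
```

Because the output plane uses V4L2_MEMORY_DMABUF, the raw NV12 frames never leave device memory on the way to the encoder.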