I see that glEGLImageTargetTexture2DOES is already used in the code, so some OES-aware code is present.
It’s not clear to me what the basic workflow would look like.
I have an EGLStream between a consumer and a producer.
On the producer side, I’d like to draw the GStreamer EGL data directly into the EGLStream to avoid any CPU copies or similar overhead.
The route would be something like:
SRC → NV pipeline (H.264 decoding to EGL, keeping the image in VRAM) → EGLSINK (puts the stream data into EGL) → ?? (some interface in the code to connect the sink and the stream) → producer draws with the provided texture → consumer gets the new frame and renders it
I’m not sure whether I’m on the right path here or whether I’m misunderstanding something.
The basic intention is to have a GStreamer pipeline that processes some source ‘into’ EGL and then hands it directly to the stream.
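For context, here is a minimal sketch of what the consumer side of such a stream could look like, assuming the EGL_KHR_stream and EGL_KHR_stream_consumer_gltexture extensions are available (the function names and PFN typedefs come from those extension specs; the helper `setup_consumer` is my own illustrative name):

```c
/* Consumer-side sketch: create an EGLStream and connect it to a
 * GL_TEXTURE_EXTERNAL_OES texture. Extension entry points must be
 * loaded via eglGetProcAddress. */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

static PFNEGLCREATESTREAMKHRPROC pCreateStream;
static PFNEGLSTREAMCONSUMERGLTEXTUREEXTERNALKHRPROC pConnectTexture;
static PFNEGLSTREAMCONSUMERACQUIREKHRPROC pAcquire;

EGLStreamKHR setup_consumer(EGLDisplay dpy, GLuint *tex_out)
{
    pCreateStream = (PFNEGLCREATESTREAMKHRPROC)
        eglGetProcAddress("eglCreateStreamKHR");
    pConnectTexture = (PFNEGLSTREAMCONSUMERGLTEXTUREEXTERNALKHRPROC)
        eglGetProcAddress("eglStreamConsumerGLTextureExternalKHR");
    pAcquire = (PFNEGLSTREAMCONSUMERACQUIREKHRPROC)
        eglGetProcAddress("eglStreamConsumerAcquireKHR");

    EGLStreamKHR stream = pCreateStream(dpy, NULL);

    /* The texture bound to GL_TEXTURE_EXTERNAL_OES at connect time
     * becomes the consumer endpoint of the stream. */
    glGenTextures(1, tex_out);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, *tex_out);
    pConnectTexture(dpy, stream);
    return stream;
}

/* Per frame: pAcquire(dpy, stream) latches the newest producer frame
 * into the texture; sample it in a shader via samplerExternalOES,
 * then release it with eglStreamConsumerReleaseKHR. */
```

The producer end of this stream would then be whatever the GStreamer side exposes.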
Edit: I just discovered nvvideosink, which has a “stream” property. I think this is the way to go.
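If nvvideosink really acts as the EGLStream producer, I imagine hooking it up would look roughly like this. This is a sketch only: the exact type the “stream” property expects (pointer vs. something else) should be checked with `gst-inspect-1.0 nvvideosink`, and the decoder element and file name here are just illustrative:

```c
/* Sketch: hand an existing EGLStreamKHR handle to nvvideosink so the
 * decoded frames become the stream's producer side. Assumes the
 * "stream" property takes the handle as a pointer. */
#include <gst/gst.h>

void run_pipeline(EGLStreamKHR stream)
{
    gst_init(NULL, NULL);

    /* Decoder chain is illustrative; use whatever NV decoder the
     * platform provides. */
    GstElement *pipeline = gst_parse_launch(
        "filesrc location=video.h264 ! h264parse ! omxh264dec "
        "! nvvideosink name=sink", NULL);

    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
    g_object_set(sink, "stream", stream, NULL); /* connect to EGLStream */
    gst_object_unref(sink);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
}
```

With that, the consumer side would simply acquire frames from the same EGLStreamKHR handle, with the decoded images staying in VRAM the whole way.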