I’m using VisionWorks 1.5.3 (L4T R24.2) on a Tegra X1. I’m creating an application, part of which will stream live video from an attached camera after some processing on the X1. I’ve been able to get a GStreamer pipeline using appsrc up and running thanks to the help of these forums, but I’d really like to derive from nvxio::GStreamerBaseRenderImpl in the same way that nvxio::GStreamerVideoRenderImpl (accessed through nvxio::createVideoRender) does. That would let me read the OpenGL buffers out directly after rendering images, text, lines, point clouds, etc.
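For context, the pattern I’m after is roughly this: after each render, read the framebuffer back (e.g. via glReadPixels) and hand the raw frame to the appsrc side of the pipeline. A minimal sketch of the hand-off, with the Frame/FrameQueue types being my own illustrative assumptions rather than anything from the nvxio or GStreamer APIs:

```cpp
#include <cstdint>
#include <mutex>
#include <queue>
#include <vector>

// Illustrative only: a raw RGBA8 frame as it might come back from
// a glReadPixels call after the renderer has drawn a frame.
struct Frame {
    int width = 0;
    int height = 0;
    std::vector<uint8_t> rgba;  // tightly packed width * height * 4 bytes
};

// Thread-safe queue bridging the render thread (producer) and the
// appsrc "need-data" callback (consumer). Names are hypothetical.
class FrameQueue {
public:
    void push(Frame f) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(std::move(f));
    }

    // Returns false when no frame is ready; the appsrc callback
    // would then wait or push silence/duplicate frames as needed.
    bool pop(Frame& out) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty()) return false;
        out = std::move(queue_.front());
        queue_.pop();
        return true;
    }

private:
    std::mutex mutex_;
    std::queue<Frame> queue_;
};
```

In a derived render class, the render loop would push into this queue right after the GL draw calls, and the appsrc callback would pop from it and wrap each frame in a GstBuffer. The point of deriving from nvxio::GStreamerBaseRenderImpl would be that the read-back and buffer management are already done there.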
In earlier versions of VisionWorks, all of the nvxio code (not just the public headers) shipped in the samples directory, so these classes could be derived from. However, VisionWorks 1.5.3 (L4T R24.2) now provides only a shared object and a limited set of public headers, so deriving from them no longer seems straightforward.
Is this possible? Or am I stuck re-inventing the wheel on this one?