Sending device memory buffers to nvhdmioverlaysink

Hi,

I need to push my own application-generated CUDA image buffers to a GStreamer pipeline like this:

My application -> appsrc -> nvhdmioverlaysink

So far I’ve managed to make it work using pinned memory, without setting any memory type when linking appsrc and nvhdmioverlaysink. This works, but I believe nvhdmioverlaysink assumes the buffers are pageable host memory, so a host->device copy still ends up taking place somewhere.

I’ve tried setting the “memory:NVMM” feature when linking appsrc and nvhdmioverlaysink, but sadly it all explodes into errors like this:

NvxBaseWorkerFunction[2481] comp OMX.Nvidia.render.hdmi.overlay.yuv420 Error -2147479552

Any ideas about how to make this work?

Anyone? Are there other forums where I could post this question?

This is a good forum for anything Jetson TK1, but one of the forums here may also be appropriate when the question isn’t specific to a JTK1:
https://devtalk.nvidia.com/default/board/53/accelerated-computing/

Hello, EmilioG:
‘memory:NVMM’ is not a public structure and can only be used among plug-ins provided by NV.

br
ChenJian

Thanks, jachen, for the confirmation. At least I won’t waste any more time attempting to make it work.

Not being able to inject NVMM buffers from an application into a GStreamer pipeline is terribly limiting when architecting for performance. Please consider fixing this at some point.