Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) dGPU from inside docker
• DeepStream Version 7
• TensorRT Version 10
• Issue Type question
• Requirement details
Hello,
I am developing a custom GStreamer element for DeepStream 7.1 (TensorRT 10, NVMM zero-copy pipeline).
My plugin receives video/x-raw(memory:NVMM) buffers (RGBA) after nvstreammux + nvvideoconvert, and I want to run custom CUDA / TensorRT inference in a backend thread.
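For reference, the upstream part of my pipeline looks roughly like this (the URI, resolution, and the myplugin element name are placeholders for my actual setup):

```shell
gst-launch-1.0 \
  uridecodebin uri=file:///path/to/sample.mp4 ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! \
  myplugin ! fakesink
```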
In DeepStream ≤6.x, examples and posts reference:
- NvDsBatchMeta->batch_buf
- NvDsBatchMeta->surface
- NVDS_USER_META_NVBUF_SURFACE
However, in DeepStream 7.x, none of these exist in the public headers (nvdsmeta.h, gstnvdsmeta.h).
I tried the following approaches, all of which fail or crash:
- gst_buffer_map() → map.data (segfault / invalid pointer)
- iterating batch_user_meta_list
- looking for NVDS_USER_META_NVBUF_SURFACE
- accessing NvDsBatchMeta fields (no surface present)
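For completeness, my gst_buffer_map() attempt followed the pattern used by the DeepStream sample plugins (e.g. gst-dsexample), where map.data is cast directly to NvBufSurface* rather than treated as raw pixels. A simplified version of what I tried (map_nvbufsurface is my own helper name):

```c
#include <gst/gst.h>
#include "nvbufsurface.h"

/* For video/x-raw(memory:NVMM) buffers, the sample plugins treat
 * map.data as an NvBufSurface pointer, not as raw pixel data.
 * Caller must gst_buffer_unmap() when done. */
static NvBufSurface *
map_nvbufsurface (GstBuffer *buf, GstMapInfo *map)
{
  if (!gst_buffer_map (buf, map, GST_MAP_READ))
    return NULL;
  return (NvBufSurface *) map->data;
}
```

This is the pattern that segfaults for me in DS-7, which is what prompted the DMA-BUF approach below.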
I found that NVMM buffers are DMA-BUF backed in DS-7, so I attempted this approach:
#include <gst/allocators/gstdmabuf.h>
#include "nvbufsurface.h"
static NvBufSurface *
get_nvbufsurface_from_gstbuffer (GstBuffer *buf)
{
  GstMemory *mem = gst_buffer_peek_memory (buf, 0);
  if (!mem || !gst_is_dmabuf_memory (mem))
    return nullptr;

  int fd = gst_dmabuf_memory_get_fd (mem);
  if (fd < 0)
    return nullptr;

  NvBufSurface *surface = nullptr;
  if (NvBufSurfaceFromFd (fd, (void **) &surface) != 0)
    return nullptr;

  return surface;
}
This compiles, but I would like confirmation from NVIDIA:
- Is DMA-BUF → NvBufSurfaceFromFd() the official and supported way in DeepStream 7.x to retrieve NvBufSurface in custom plugins?
- Is this how gst-nvinfer internally accesses NVMM buffers in DS-7?
- Are there any lifetime / synchronization constraints when using this surface in a backend thread (the buffer is gst_buffer_ref()'d)?
- Is there any other recommended public API for this use case?
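For context on the lifetime question, this is roughly how my backend thread currently holds the buffer (a simplified sketch; the queue and the inference call are placeholders for my actual code):

```cpp
#include <gst/gst.h>
#include <cuda_runtime.h>
#include "nvbufsurface.h"

/* Called on the streaming thread: keep the GstBuffer (and therefore
 * the NvBufSurface memory behind it) alive while the backend works. */
static void
submit_to_backend (GstBuffer *buf, NvBufSurface *surface)
{
  gst_buffer_ref (buf);
  /* ... push {buf, surface} onto the backend queue ... */
}

/* Called on the backend thread after inference on `stream` is issued:
 * make sure all CUDA work touching the surface has finished before
 * the buffer is released back to the pool. */
static void
backend_item_done (GstBuffer *buf, cudaStream_t stream)
{
  cudaStreamSynchronize (stream);
  gst_buffer_unref (buf);
}
```

My assumption is that holding the ref keeps the surface valid; I would like confirmation that nothing else (e.g. pool recycling or an unmap requirement) invalidates it.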
This information does not seem to be documented clearly in the DeepStream 7 SDK docs, and examples never explicitly retrieve the surface.
Thank you for clarification.