Pass a frame from a DeepStream pipeline to an NPP function

• Hardware Platform (Jetson / GPU)
Jetson, NVIDIA Xavier NX 8GB
• DeepStream Version
DeepStream 6.3
• JetPack Version (valid for Jetson only)
JetPack 5.1.3-b29
• TensorRT Version
8.5.2.2
• Issue Type (questions, new requirements, bugs)
questions

At a certain point in my DeepStream pipeline, I periodically need to retrieve a frame and check its average pixel value. To do this, I decided to call the nppiMean_8u_C1R function inside the osd_sink_pad_buffer_probe(GstPad *pad, GstPadProbeInfo *info, gpointer u_data) callback.

Here’s what I do:

GstBuffer *buf = (GstBuffer *) info->data;
GstMapInfo in_map;
if (!gst_buffer_map(buf, &in_map, GST_MAP_READ))
    return GST_PAD_PROBE_OK; /* mapping failed, skip this buffer */
NvBufSurface *surface = (NvBufSurface *) in_map.data;
unsigned char *d_img = (unsigned char *) surface->surfaceList[0].dataPtr;

At this point, I have a pointer d_img to the frame’s pixel data.

However, when I pass d_img as the first argument to nppiMean_8u_C1R, I get a cudaErrorIllegalAddress error.

From what I understand, the issue is that d_img points to data with memType == NVBUF_MEM_SURFACE_ARRAY, which nppiMean_8u_C1R doesn’t seem to support.
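For completeness, the failing call presumably looks something like the sketch below. The ROI, the scratch-buffer handling, and the assumption of a single 8-bit plane are mine, not from the original post:

```cpp
// Sketch only: passing d_img to NPP, assuming a single 8-bit plane.
// This can only work if d_img is a real CUDA device pointer.
NvBufSurfaceParams *params = &surface->surfaceList[0];
NppiSize roi = { (int) params->width, (int) params->height };

// NPP reductions need a device scratch buffer.
int scratch_size = 0;
nppiMeanGetBufferHostSize_8u_C1R(roi, &scratch_size);
Npp8u *d_scratch = NULL;
cudaMalloc((void **) &d_scratch, scratch_size);

// The result pointer must also be device memory.
Npp64f *d_mean = NULL;
cudaMalloc((void **) &d_mean, sizeof(Npp64f));

NppStatus st = nppiMean_8u_C1R((Npp8u *) d_img, params->pitch, roi,
                               d_scratch, d_mean);
// With memType == NVBUF_MEM_SURFACE_ARRAY, dataPtr is not a CUDA device
// pointer, which is why this fails with cudaErrorIllegalAddress.

Npp64f mean = 0.0;
cudaMemcpy(&mean, d_mean, sizeof(Npp64f), cudaMemcpyDeviceToHost);
cudaFree(d_mean);
cudaFree(d_scratch);
```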

My questions are:

  1. Are NPP functions able to work with memType == NVBUF_MEM_SURFACE_ARRAY?
  • If YES, why am I getting this error?
  • If NO, what are my options:
    • Can I use NvBufSurfTransform() to convert the frame object from NVBUF_MEM_SURFACE_ARRAY to another memory type?
    • Can I use cudaMemcpy() to store the frame object in a compatible memory type?
    • Can I use NvBufSurfaceMapEglImage()?
    • Or is there perhaps another recommended approach?

Which option would be the best for my case?

What does your whole pipeline look like? You can set the compute-hw property on some plugins so they use GPU memory.

Here is my pipeline:

Could you please clarify how to use compute-hw to pass a frame to an NPP function?

I mean you can try setting compute-hw=1 and nvbuf-memory-type=3 on the nvvideoconvert plugins in your pipeline, then use that memory type to see if your code runs properly.
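For anyone following along, setting these properties in a gst-launch pipeline would look roughly like this (the source and caps are illustrative, not from this thread):

```shell
# Illustrative only: compute-hw=1 selects the GPU, and nvbuf-memory-type=3
# requests NVBUF_MEM_CUDA_UNIFIED for the nvvideoconvert output buffers.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  nvvideoconvert compute-hw=1 nvbuf-memory-type=3 ! \
  'video/x-raw(memory:NVMM),format=RGBA' ! fakesink
```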

@yuweiw , I set compute-hw=1 and nvbuf-memory-type=3 on the nvvideoconvert plugins in my pipeline and got the following error:

===== NvVideo: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:4332: => Surface type not supported for transformation NVBUF_MEM_CUDA_UNIFIED

ERROR from element source: Internal data stream error.
Error details: gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline/GstV4l2Src:source:
streaming stopped, reason error (-5)
Returned, stopping playback
nvstreammux: Successfully handled EOS for source_id=0

Actually, I expected this error, since NVBUF_MEM_CUDA_UNIFIED (which corresponds to nvbuf-memory-type=3) is unsupported on Jetson, according to the docs: Frequently Asked Questions — DeepStream 6.4 documentation

So what should I do to pass a frame to an NPP function?

OK. Since NPP cannot support the NVBUF_MEM_SURFACE_ARRAY type directly, you can map the buffer out first.
For how to map the buffer out from the NvBufSurface, you can refer to our source code sources/gst-plugins/gst-dsexample/gstdsexample_optimized.cpp, in particular the convert_batch_and_push_to_process_thread function.
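A sketch of what "mapping the buffer out" can look like on Jetson, using the EGL-image route (the NvBufSurfaceMapEglImage option from the question above). This is my adaptation, not the exact gstdsexample code: error handling is mostly omitted, and it assumes a pitch-linear single-plane 8-bit surface — block-linear surfaces expose CUDA arrays instead of pointers:

```cpp
#include "nvbufsurface.h"
#include <cuda.h>
#include <cuda_runtime.h>
#include <cudaEGL.h>
#include <nppi.h>

// surface: the NvBufSurface* obtained in the probe, as in the first post.
// Returns the frame mean, or a negative value on failure.
static double frame_mean(NvBufSurface *surface)
{
    if (NvBufSurfaceMapEglImage(surface, 0) != 0)
        return -1.0;

    // Register the EGL image with CUDA to get a device-accessible frame.
    CUgraphicsResource res = NULL;
    CUeglFrame egl_frame;
    cuGraphicsEGLRegisterImage(&res,
        surface->surfaceList[0].mappedAddr.eglImage,
        CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
    cuGraphicsResourceGetMappedEglFrame(&egl_frame, res, 0, 0);

    // For a pitch-linear surface, plane 0 is now a CUDA device pointer.
    Npp8u *d_img = (Npp8u *) egl_frame.frame.pPitch[0];
    int step = surface->surfaceList[0].planeParams.pitch[0];
    NppiSize roi = { (int) surface->surfaceList[0].width,
                     (int) surface->surfaceList[0].height };

    // NPP reductions need a device scratch buffer and a device result.
    int scratch_size = 0;
    nppiMeanGetBufferHostSize_8u_C1R(roi, &scratch_size);
    Npp8u *d_scratch = NULL;
    Npp64f *d_mean = NULL;
    cudaMalloc((void **) &d_scratch, scratch_size);
    cudaMalloc((void **) &d_mean, sizeof(Npp64f));

    nppiMean_8u_C1R(d_img, step, roi, d_scratch, d_mean);

    Npp64f mean = 0.0;
    cudaMemcpy(&mean, d_mean, sizeof(Npp64f), cudaMemcpyDeviceToHost);

    cudaFree(d_mean);
    cudaFree(d_scratch);
    cuGraphicsUnregisterResource(res);
    NvBufSurfaceUnMapEglImage(surface, 0);
    return (double) mean;
}
```

Link against -lcuda, -lnvbufsurface, and the NPP libraries; inside a DeepStream app a CUDA context already exists, so no extra cuInit call should be needed.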

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.