gst_dsexample_transform_ip: need NVBUF_MEM_CUDA_UNIFIED memory for OpenCV blurring

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) 1080Ti
• DeepStream Version 5.0
• NVIDIA GPU Driver Version (valid for GPU only) 440.100

I am running the gst-dsexample plugin and it works until I enable “blur-objects=true”, at which point I hit the error message in the title.

The pipeline I am running is:

gst-launch-1.0 filesrc location=/data/agx/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt ! nvvideoconvert ! dsexample full-frame=0 blur-objects=true ! nvdsosd ! nveglglessink

What should I do to set the NVBUF_MEM_CUDA_UNIFIED memory type?

I traced the code: line 379 of gstdsexample.cpp does set the correct memory type when not building for aarch64:

#ifdef __aarch64__
  create_params.memType = NVBUF_MEM_DEFAULT;
#else
  create_params.memType = NVBUF_MEM_CUDA_UNIFIED;
#endif

What else do I need to set to run “blur-objects=true” in a dGPU environment? Thanks a lot for your help.

Never mind. I found the switch to set the NVBUF_MEM_CUDA_UNIFIED memory type on the nvvideoconvert element:

nvvideoconvert nvbuf-memory-type=3

So the revised pipeline looks like:

gst-launch-1.0 filesrc location=/data/agx/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/sources/apps/sample_apps/deepstream-test1/dstest1_pgie_config.txt ! nvvideoconvert nvbuf-memory-type=3 ! dsexample full-frame=0 blur-objects=true ! nvdsosd ! nveglglessink

Case closed.