Buffer Conversion Failed

I ran the DeepStream app with a YOLOv3 model in FP16 mode, with gpu-id=3 set in the config files, and I get the error trace below.

Side note: GPU 0 is fully occupied by another application. The GPUs being used are Tesla V100s.
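
For context, this is roughly the relevant part of my deepstream-app config. The group and key names are the ones from the stock sample configs, and apart from gpu-id=3 the values below are placeholders rather than my exact settings (config-file points at the YOLOv3 nvinfer config, named as in the objectDetector_Yolo sample):

[source0]
enable=1
type=3
uri=file://<input stream>
num-sources=1
gpu-id=3
cudadec-memtype=0

[streammux]
gpu-id=3
nvbuf-memory-type=0
batch-size=1
width=1920
height=1080

[primary-gie]
enable=1
gpu-id=3
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV3.txt

FP16 is selected in the nvinfer config file referenced by config-file (network-mode=2 under [property], if I have that right).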

Error Trace:

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:163>: Pipeline ready

** INFO: <bus_callback:149>: Pipeline running

Creating LL OSD context new
0:00:57.624387568 31136 0x55ec9f8c4f20 ERROR                nvinfer gstnvinfer.cpp:976:get_converted_buffer:<primary_gie_classifier> cudaMemset2DAsync failed with error cudaErrorMemoryAllocation while converting buffer
0:00:57.624409938 31136 0x55ec9f8c4f20 WARN                 nvinfer gstnvinfer.cpp:1246:gst_nvinfer_process_full_frame:<primary_gie_classifier> error: Buffer conversion failed
ERROR from primary_gie_classifier: Buffer conversion failed
Debug info: gstnvinfer.cpp(1246): gst_nvinfer_process_full_frame (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier
Quitting
App run failed

My guess is that the DeepStream application is still allocating some memory on GPU 0 (which is fully utilized by the other application). Can I fix this by tweaking the nvbuf-memory-type or cudadec-memtype parameters in the config file?
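
Concretely, the change I have in mind looks something like this; the value meanings in the comments are just my reading of the documentation, not something I have verified fixes this:

[source0]
gpu-id=3
# 0 = device memory, 1 = pinned memory, 2 = unified memory (my understanding)
cudadec-memtype=0

[streammux]
gpu-id=3
# 0 = default, 1 = CUDA pinned, 2 = CUDA device, 3 = CUDA unified (my understanding)
nvbuf-memory-type=2

[primary-gie]
gpu-id=3
nvbuf-memory-type=2

Would setting the memory type explicitly on every group like this keep the allocations on GPU 3, or is this error caused by something else?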