Please provide complete information as applicable to your setup.
• Hardware Platform: GTX 1080
• DeepStream Version: 5.0
• NVIDIA GPU Driver Version (valid for GPU only): 440.33.01
I have two GTX 1080 cards, both passed into the DeepStream Docker container. I run my pipeline with gpu-id set to 1 for every plugin. Whenever I use the nvinferserver plugin, I observe that a constant 507 MiB is occupied on GPU 0, under the same PID as the application running on GPU 1. There is no GPU utilization on GPU 0, and no plugin is set to gpu-id 0. When I filled up the total memory of GPU 0 and then ran the pipeline, it threw this error:
A non-primary context 0x561bd21c7090 for device 0 exists before initializing the StreamExecutor. The primary context is now 0x561bd21ca490. We haven’t verified StreamExecutor works with that.
2020-09-15 11:20:11.749658: F tensorflow/stream_executor/lib/statusor.cc:34] Attempting to fetch value instead of handling error Internal: failed initializing StreamExecutor for CUDA device ordinal 0: Internal: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 8513978368
Aborted (core dumped)
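As a possible workaround (an assumption on my part, not something I have verified against nvinferserver internals): the TensorFlow backend inside Triton appears to initialize a CUDA context on every device it can see, regardless of the gpu-id settings in the pipeline. Hiding GPU 0 from the process with `CUDA_VISIBLE_DEVICES` before launching might avoid the 507 MiB allocation. Note that CUDA renumbers the remaining device, so the pipeline config would then need gpu-id=0; the config filename below is hypothetical.

```shell
# Expose only the second physical GPU to this process;
# CUDA renumbers it as device ordinal 0.
export CUDA_VISIBLE_DEVICES=1

# Sanity check: only one device should now be visible.
echo "visible devices: $CUDA_VISIBLE_DEVICES"

# All plugins must now use gpu-id=0, since only one device is visible.
# deepstream-app -c my_pipeline_config.txt   # hypothetical config name
```

Inside Docker the same effect can be had at container creation with `--gpus '"device=1"'`, which avoids exposing GPU 0 to the container at all.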