Bump in memory usage due to multiple CUDA libraries

My deployment requires me to use multiple CUDA libraries. Previously, when I used the NPP library alongside the Caffe library, I saw a bump in the memory usage of the Linux process, visible with the Linux top utility. Similarly, when I use Caffe and TensorRT together, I see a comparable bump in global memory usage.
For example, when I run one network with Caffe and one with TensorRT, my memory usage increases beyond 20%. But when I run both networks with Caffe, my memory usage stays below 15%. (Note: the percentages are from the %MEM field of the Linux "top" utility.)
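For reference, the numbers above can be captured non-interactively rather than by reading top's screen. This is a minimal sketch; the PID of the Caffe/TensorRT process is an assumption, and `$$` (the current shell) is used below only as a stand-in so the snippet runs as-is:

```shell
# Sample resident set size (RSS, in KiB) and %MEM for a given PID.
# Replace "$$" with the PID of your Caffe/TensorRT process to log
# its memory usage before and after each library is initialized.
pid=$$
ps -o rss=,pmem= -p "$pid"
```

Running this at fixed points (before loading a library, after the first inference, etc.) makes it easy to attribute which library accounts for which part of the bump.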

I saw similar behavior with NPP: we used just two functions from NPP and saw a bump in %MEM usage. We had to implement our own CUDA kernels to avoid NPP and keep memory usage within limits.

Question: Is there something I am missing, or can we do something to keep the memory usage the same while using both libraries, Caffe and TensorRT?

Linux Distro - L4T 28.2
GPU type - Tegra TX1
CUDA - 9.0
cuDNN - 7.1.5
TensorRT - 4.1.3

Thanks in advance


AFAIK, there’s no control in TensorRT to limit CUDA memory consumption, especially for system/CPU memory.