I want to reduce the RAM usage of the model, so I tried:
from imageai.Prediction.Custom import CustomImagePrediction
import tensorflow as tf
import cv2
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.192)
sess = tf.Session(config = tf.ConfigProto(gpu_options = gpu_options))
prediction = CustomImagePrediction()
prediction.setModelTypeAsDenseNet()
prediction.setModelPath("Densenet.h5")
prediction.setJsonPath("model_class.json")
prediction.loadModel(num_objects=5)
I got:
2019-05-08 10:42:12.682317: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:864] ARM64 does not support NUMA - returning NUMA node zero
2019-05-08 10:42:12.682515: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1392] Found device 0 with properties:
name: NVIDIA Tegra X2 major: 6 minor: 2 memoryClockRate(GHz): 1.3005
pciBusID: 0000:00:00.0
totalMemory: 7.66GiB freeMemory: 4.31GiB
2019-05-08 10:42:12.682572: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1471] Adding visible gpu devices: 0
2019-05-08 10:42:13.556847: I tensorflow/core/common_runtime/gpu/gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-08 10:42:13.556958: I tensorflow/core/common_runtime/gpu/gpu_device.cc:958] 0
2019-05-08 10:42:13.556995: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: N
2019-05-08 10:42:13.557176: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1506 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)
2019-05-08 10:42:54.853030: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1471] Adding visible gpu devices: 0
2019-05-08 10:42:54.853191: I tensorflow/core/common_runtime/gpu/gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-08 10:42:54.853229: I tensorflow/core/common_runtime/gpu/gpu_device.cc:958] 0
2019-05-08 10:42:54.853252: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: N
2019-05-08 10:42:54.853500: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1506 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)
I closed the session and ran it again:
2019-05-08 10:56:09.342301: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:864] ARM64 does not support NUMA - returning NUMA node zero
2019-05-08 10:56:09.342495: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1392] Found device 0 with properties:
name: NVIDIA Tegra X2 major: 6 minor: 2 memoryClockRate(GHz): 1.3005
pciBusID: 0000:00:00.0
totalMemory: 7.66GiB freeMemory: 3.83GiB
It seems some block of memory is not released.
When I ran the model right after boot, I had "lfb ~1600x4MB"; after running the model a third time, it had dropped to "lfb 622x4MB".
What's the best way to restrict the imageai library to use less RAM, or only the amount of RAM we declare?
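For reference, here is a minimal sketch of what I think should constrain the allocation. It assumes ImageAI loads its models through the Keras backend (so the session has to be registered with Keras, otherwise Keras creates its own unconstrained session and the GPUOptions above have no effect), and it uses the tf.compat.v1 aliases so the same code runs on TF 1.13+ as well as TF 2.x:

```python
import tensorflow as tf

tf1 = tf.compat.v1

# Cap the fraction of GPU memory TF may grab up front...
gpu_options = tf1.GPUOptions(per_process_gpu_memory_fraction=0.192)
config = tf1.ConfigProto(gpu_options=gpu_options)
# ...and/or let TF grow its allocation on demand instead of
# pre-allocating a large block at startup:
config.gpu_options.allow_growth = True

sess = tf1.Session(config=config)
# Register the constrained session with Keras so Keras-based libraries
# reuse it instead of opening their own session:
tf1.keras.backend.set_session(sess)
```

Note that closing the session does not necessarily return GPU memory to the OS; TensorFlow's allocator typically holds on to it until the process exits, which might explain the lower freeMemory reported on the second run.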
Thank you