CUDA_ERROR_OUT_OF_MEMORY HELP!!!

I am running some RCNN models on my GTX 1070, and they only work when I freshly start the PC. If I run the program a second time, I get CUDA_ERROR_OUT_OF_MEMORY, even though I have completely quit the terminal and the program: the GPU memory is never released, so I have to restart the PC, which is annoying. Do I need to clear the GPU memory manually, or what should I do about the errors below?

Using TensorFlow backend.
2018-02-10 14:41:52.156792: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-02-10 14:41:52.257698: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-02-10 14:41:52.257918: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:01:00.0
totalMemory: 7.92GiB freeMemory: 76.25MiB
2018-02-10 14:41:52.257945: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
2018-02-10 14:41:52.259420: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 76.25M (79953920 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-02-10 14:41:54.946215: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
2018-02-10 14:41:56.051866: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 7.62M (7995392 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY

Use nvidia-smi to list the processes that are still active on the GPU in question.

Then manually kill each of those processes.
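For example, on Linux the workflow would look roughly like the following sketch. Here <PID> is only a placeholder for whatever process ID nvidia-smi actually reports on your machine, and sudo is only needed if the process does not belong to your user:

    nvidia-smi            # the "Processes" table at the bottom lists each PID still holding GPU memory
    sudo kill -9 <PID>    # repeat for every stale Python/TensorFlow process shown in that table

Once the orphaned processes are gone, nvidia-smi should again report most of the 7.92GiB as free (instead of the 76.25MiB in your log), and the model should run again without rebooting.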

Thank you!!!