Resource exhausted: OOM when allocating tensor with shape[256]

I am getting an OutOfMemory exception. How can I resolve this issue?

Python version: 3.6
Jetson Nano JetPack version: 4.4
TensorFlow version: tensorflow==1.15.2+nv20.6
Keras: 2.0.5
I have attached the source code as an image file.

Error :

tensorflow/core/framework/] OP_REQUIRES failed at assign_op.h:117 : Resource exhausted: OOM when allocating tensor with shape[256] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc

Source code

Refer to the logs generated when execution starts:

“adding visible gpu devices: 0”

Is the GPU not allocated for the job?


Could you first check whether the configuration shared in this topic helps?


I have improved performance by specifying the configs per_process_gpu_memory_fraction=0.2, allow_growth=True, and visible_device_list="0". But at some point the swap memory reaches 100% and the system freezes. Sometimes the system gives the error message shown in the screenshot.
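For reference, the settings above would be applied through a TF 1.x session config along these lines; this is a sketch of one common way to do it, not the poster's exact code (which was attached as an image):

```python
import tensorflow as tf

# GPU memory settings described above (TF 1.x API).
gpu_options = tf.GPUOptions(
    per_process_gpu_memory_fraction=0.2,  # cap this process at ~20% of GPU memory
    allow_growth=True,                    # allocate incrementally rather than all up front
    visible_device_list="0",              # expose only GPU 0 to the process
)
config = tf.ConfigProto(gpu_options=gpu_options)
sess = tf.Session(config=config)

# With standalone Keras on TF 1.x, the session usually needs to be
# registered so Keras layers run inside it:
# import keras.backend as K
# K.set_session(sess)
```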

Could you suggest how to resolve this issue?

You could increase the size of swap, but that only goes so far, since the GPU needs physical RAM. Other processes can be swapped out, but if the GPU itself requires more RAM, then you'd need to rearrange your code to use less memory.
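If you do want to try a larger swap first, a typical way to add a swap file on the Jetson looks like this (the `/swapfile` path and 6 GB size are just an example, matching the size mentioned below; run as root):

```shell
# Create a 6 GB swap file and enable it.
fallocate -l 6G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Optionally persist it across reboots:
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```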

I have tried a hard limit on the GPU. I created a swap file of size 6 GB, but I again got an out-of-memory exception on the Jetson Nano.
I have restricted TensorFlow to allocate only 2560 MB (memory_limit=2560) of memory on the first GPU.
I am using the Stanford Cars Dataset.
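The hard limit described above is typically set with the `tf.config.experimental` virtual-device API, which I believe is available in TF 1.15; this is a sketch of that approach, not the poster's attached code:

```python
import tensorflow as tf

# Cap TensorFlow's allocation on the first GPU at 2560 MB by creating
# a single virtual device with a fixed memory limit.
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2560)])
```

Note that this must run before the GPU is initialized (i.e., before any session or op touches the device), or TensorFlow will raise a runtime error.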

This is the source code:

Please help me solve this issue.

Your approach is correct, but I don't know enough about the AI side to suggest useful methods of limiting memory for your case. Someone else will probably have a way to reduce the required memory, but this is a typical reason to use something with more RAM; e.g., the NX is the next step up and has 8 GB of RAM (the NX also has a newer GPU architecture and six cores, and seems to work on the same carrier board a Nano B02 or newer would use).