Running TensorFlow Faster R-CNN on Jetson Nano

I am running tf-faster-rcnn on a Jetson Nano, but test_faster_rcnn.sh fails with the following output:

Loaded.
2019-09-02 17:16:52.083757: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 14.32MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-09-02 17:17:04.198378: W tensorflow/core/common_runtime/bfc_allocator.cc:267] Allocator (GPU_0_bfc) ran out of memory trying to allocate 33.94MiB.  Current allocation summary follows.
2019-09-02 17:17:04.199946: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (256): 	Total Chunks: 28, Chunks in use: 28. 7.0KiB allocated for chunks. 7.0KiB in use in bin. 3.3KiB client-requested in use in bin.
2019-09-02 17:17:04.200113: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (512): 	Total Chunks: 17, Chunks in use: 16. 9.0KiB allocated for chunks. 8.2KiB in use in bin. 8.0KiB client-requested in use in bin.
2019-09-02 17:17:04.200757: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (1024): 	Total Chunks: 18, Chunks in use: 18. 18.5KiB allocated for chunks. 18.5KiB in use in bin. 18.1KiB client-requested in use in bin.
2019-09-02 17:17:04.204144: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (2048): 	Total Chunks: 51, Chunks in use: 50. 105.2KiB allocated for chunks. 102.5KiB in use in bin. 100.2KiB client-requested in use in bin.

......

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[1,64,278,500] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[node MobilenetV1/Conv2d_1_pointwise/Conv2D (defined at /home/a/archiconda3/envs/tensorflow/lib/python3.6/site-packages/tensorflow/contrib/layers/python/layers/layers.py:1060)  = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](MobilenetV1/Conv2d_1_depthwise/Relu6, MobilenetV1/Conv2d_1_pointwise/weights/read)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

	 [[{{node MobilenetV1_2/rois/stack_1/_313}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_622_MobilenetV1_2/rois/stack_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
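The hint in the log refers to the RunOptions protobuf of the TF 1.x session API. As a rough, standalone sketch (the tiny graph below is made up for illustration and is not part of tf-faster-rcnn), enabling it looks like this:

```python
import tensorflow as tf

# Toy graph, only to make the example runnable on its own.
x = tf.placeholder(tf.float32, shape=[None, 4])
y = tf.layers.dense(x, 2)

# Ask TF to report the currently allocated tensors if an OOM occurs.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0, 4.0]]}, options=run_options)
```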

I am new to TF and would appreciate any advice on how to resolve this error.

Thank you in advance!

OOM - out of memory

You need to free up memory. Close every application you don’t need while inference is running; the Chromium browser, for example, can eat a lot of memory. Another way to free up memory is to run the Jetson Nano in “headless mode” (search the forum for that phrase to find out more), but this is only feasible when you don’t need a GUI for inference.
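If freeing system memory alone is not enough, a common additional mitigation on the Nano (where CPU and GPU share the same physical RAM) is to stop TensorFlow from pre-allocating most of that memory up front. A minimal sketch, assuming the TF 1.x session API that tf-faster-rcnn uses:

```python
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True                    # allocate GPU memory on demand
config.gpu_options.per_process_gpu_memory_fraction = 0.5  # optional hard cap; tune for your model

with tf.Session(config=config) as sess:
    ...  # build/load the Faster R-CNN graph and run inference here
```

In tf-faster-rcnn you would pass such a config wherever the test script creates its tf.Session. Whether a given input resolution still fits is model-dependent, so this only helps if the network itself can run within the remaining memory.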