OOM error

Although no program was running beforehand, while running Re3 the log shows only around 250 MiB free out of 7.66 GiB. I am working on a TX2 Developer Kit and have verified that no other process was running before starting Re3.
I first tried Re3 on my Tesla, where it only took around 2700 MiB of memory. Re3 is an object-tracking network and I am just trying to run inference with it. The repository is at https://gitlab.com/danielgordon10/re3-tensorflow. I installed TensorFlow with the command given here: https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetsontx2/index.html.

I just started working on this board; previously I only worked with a Tesla K40.
I am running this inference the same way I would on the Tesla board. Do I have to do something extra to run inference here?

Here is the whole error:

2019-06-04 17:25:06.069537: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:864] ARM64 does not support NUMA - returning NUMA node zero
2019-06-04 17:25:06.069673: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1392] Found device 0 with properties:
name: NVIDIA Tegra X2 major: 6 minor: 2 memoryClockRate(GHz): 1.3005
pciBusID: 0000:00:00.0
totalMemory: 7.66GiB freeMemory: 290.22MiB
2019-06-04 17:25:06.069724: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1471] Adding visible gpu devices: 0
2019-06-04 17:25:06.901734: I tensorflow/core/common_runtime/gpu/gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-06-04 17:25:06.901805: I tensorflow/core/common_runtime/gpu/gpu_device.cc:958] 0
2019-06-04 17:25:06.901832: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: N
2019-06-04 17:25:06.901971: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 86 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X2, pci bus id: 0000:00:00.0, compute capability: 6.2)
2019-06-04 17:25:17.165882: W tensorflow/core/common_runtime/bfc_allocator.cc:275] Allocator (GPU_0_bfc) ran out of memory trying to allocate 289.88MiB. Current allocation summary follows.
2019-06-04 17:25:17.166795: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (256): Total Chunks: 41, Chunks in use: 41. 10.2KiB allocated for chunks. 10.2KiB in use in bin. 1.9KiB client-requested in use in bin.
2019-06-04 17:25:17.167033: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (512): Total Chunks: 2, Chunks in use: 2. 1.0KiB allocated for chunks. 1.0KiB in use in bin. 768B client-requested in use in bin.
2019-06-04 17:25:17.167223: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (1024): Total Chunks: 9, Chunks in use: 9. 11.2KiB allocated for chunks. 11.2KiB in use in bin. 11.0KiB client-requested in use in bin.
2019-06-04 17:25:17.167386: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (2048): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-06-04 17:25:17.167533: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (4096): Total Chunks: 2, Chunks in use: 2. 10.0KiB allocated for chunks. 10.0KiB in use in bin. 10.0KiB client-requested in use in bin.
2019-06-04 17:25:17.167669: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (8192): Total Chunks: 1, Chunks in use: 1. 8.0KiB allocated for chunks. 8.0KiB in use in bin. 8.0KiB client-requested in use in bin.
2019-06-04 17:25:17.167789: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (16384): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-06-04 17:25:17.167926: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (32768): Total Chunks: 1, Chunks in use: 1. 32.0KiB allocated for chunks. 32.0KiB in use in bin. 32.0KiB client-requested in use in bin.
2019-06-04 17:25:17.168062: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (65536): Total Chunks: 1, Chunks in use: 1. 64.0KiB allocated for chunks. 64.0KiB in use in bin. 64.0KiB client-requested in use in bin.
2019-06-04 17:25:17.168204: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (131072): Total Chunks: 1, Chunks in use: 1. 136.2KiB allocated for chunks. 136.2KiB in use in bin. 136.1KiB client-requested in use in bin.
2019-06-04 17:25:17.168327: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (262144): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-06-04 17:25:17.168453: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (524288): Total Chunks: 1, Chunks in use: 0. 751.2KiB allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-06-04 17:25:17.168584: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (1048576): Total Chunks: 1, Chunks in use: 1. 1.69MiB allocated for chunks. 1.69MiB in use in bin. 1.69MiB client-requested in use in bin.
2019-06-04 17:25:17.168711: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (2097152): Total Chunks: 3, Chunks in use: 2. 8.31MiB allocated for chunks. 4.53MiB in use in bin. 3.70MiB client-requested in use in bin.
2019-06-04 17:25:17.168840: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (4194304): Total Chunks: 1, Chunks in use: 1. 4.00MiB allocated for chunks. 4.00MiB in use in bin. 3.38MiB client-requested in use in bin.
2019-06-04 17:25:17.169113: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (8388608): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-06-04 17:25:17.169238: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (16777216): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-06-04 17:25:17.169362: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (33554432): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-06-04 17:25:17.169481: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (67108864): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-06-04 17:25:17.169598: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (134217728): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-06-04 17:25:17.169715: I tensorflow/core/common_runtime/bfc_allocator.cc:630] Bin (268435456): Total Chunks: 0, Chunks in use: 0. 0B allocated for chunks. 0B in use in bin. 0B client-requested in use in bin.
2019-06-04 17:25:17.169849: I tensorflow/core/common_runtime/bfc_allocator.cc:646] Bin for 289.88MiB was 256.00MiB, Chunk State:
2019-06-04 17:25:17.169997: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e50000 of size 1280
2019-06-04 17:25:17.170117: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e50500 of size 256
2019-06-04 17:25:17.170223: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e50600 of size 256
2019-06-04 17:25:17.170337: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e50700 of size 512
2019-06-04 17:25:17.170442: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e50900 of size 256
2019-06-04 17:25:17.170545: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e50a00 of size 256
2019-06-04 17:25:17.170643: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e50b00 of size 256
2019-06-04 17:25:17.170743: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e50c00 of size 256
2019-06-04 17:25:17.170842: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e50d00 of size 256
2019-06-04 17:25:17.170944: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e50e00 of size 256
2019-06-04 17:25:17.171049: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e50f00 of size 1024
2019-06-04 17:25:17.171152: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e51300 of size 256
2019-06-04 17:25:17.171251: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e51400 of size 256
2019-06-04 17:25:17.171351: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e51500 of size 256
2019-06-04 17:25:17.171450: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e51600 of size 256
2019-06-04 17:25:17.171550: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e51700 of size 256
2019-06-04 17:25:17.171651: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e51800 of size 256
2019-06-04 17:25:17.171756: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e51900 of size 1536
2019-06-04 17:25:17.171856: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e51f00 of size 256
2019-06-04 17:25:17.171956: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e52000 of size 256
2019-06-04 17:25:17.172055: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e52100 of size 1536
2019-06-04 17:25:17.172154: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e52700 of size 256
2019-06-04 17:25:17.172257: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e52800 of size 256
2019-06-04 17:25:17.172357: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e52900 of size 1024
2019-06-04 17:25:17.172458: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e52d00 of size 256
2019-06-04 17:25:17.172558: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e52e00 of size 256
2019-06-04 17:25:17.172658: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e52f00 of size 256
2019-06-04 17:25:17.172759: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e53000 of size 256
2019-06-04 17:25:17.172910: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e53100 of size 256
2019-06-04 17:25:17.173026: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e53200 of size 256
2019-06-04 17:25:17.173137: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e53300 of size 4096
2019-06-04 17:25:17.173237: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e54300 of size 256
2019-06-04 17:25:17.173339: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e54400 of size 256
2019-06-04 17:25:17.173439: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e54500 of size 256
2019-06-04 17:25:17.173541: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e54600 of size 256
2019-06-04 17:25:17.173639: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e54700 of size 256
2019-06-04 17:25:17.173738: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e54800 of size 256
2019-06-04 17:25:17.173840: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e54900 of size 256
2019-06-04 17:25:17.173943: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e54a00 of size 8192
2019-06-04 17:25:17.174045: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e56a00 of size 256
2019-06-04 17:25:17.174144: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e56b00 of size 256
2019-06-04 17:25:17.174245: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e56c00 of size 256
2019-06-04 17:25:17.174343: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e56d00 of size 256
2019-06-04 17:25:17.174448: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e56e00 of size 139520
2019-06-04 17:25:17.174551: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e78f00 of size 512
2019-06-04 17:25:17.174656: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e79100 of size 6144
2019-06-04 17:25:17.174757: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e7a900 of size 256
2019-06-04 17:25:17.174857: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e7aa00 of size 256
2019-06-04 17:25:17.174957: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e7ab00 of size 1024
2019-06-04 17:25:17.175127: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e7af00 of size 32768
2019-06-04 17:25:17.175245: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e82f00 of size 256
2019-06-04 17:25:17.175364: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e83000 of size 256
2019-06-04 17:25:17.175473: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e83100 of size 1536
2019-06-04 17:25:17.175583: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e83700 of size 1536
2019-06-04 17:25:17.175687: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e83d00 of size 1024
2019-06-04 17:25:17.175794: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e84100 of size 65536
2019-06-04 17:25:17.175893: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e94100 of size 256
2019-06-04 17:25:17.175993: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc0e94200 of size 256
2019-06-04 17:25:17.176097: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Free at 0xfc0e94300 of size 769280
2019-06-04 17:25:17.176202: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc1050000 of size 2097152
2019-06-04 17:25:17.176306: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc1250000 of size 4194304
2019-06-04 17:25:17.176413: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc1650000 of size 2654208
2019-06-04 17:25:17.176518: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Chunk at 0xfc18d8000 of size 1769472
2019-06-04 17:25:17.176624: I tensorflow/core/common_runtime/bfc_allocator.cc:665] Free at 0xfc1a88000 of size 3964928
2019-06-04 17:25:17.176722: I tensorflow/core/common_runtime/bfc_allocator.cc:671] Summary of in-use Chunks by size:
2019-06-04 17:25:17.176856: I tensorflow/core/common_runtime/bfc_allocator.cc:674] 41 Chunks of size 256 totalling 10.2KiB
2019-06-04 17:25:17.177230: I tensorflow/core/common_runtime/bfc_allocator.cc:674] 2 Chunks of size 512 totalling 1.0KiB
2019-06-04 17:25:17.177425: I tensorflow/core/common_runtime/bfc_allocator.cc:674] 4 Chunks of size 1024 totalling 4.0KiB
2019-06-04 17:25:17.177564: I tensorflow/core/common_runtime/bfc_allocator.cc:674] 1 Chunks of size 1280 totalling 1.2KiB
2019-06-04 17:25:17.177724: I tensorflow/core/common_runtime/bfc_allocator.cc:674] 4 Chunks of size 1536 totalling 6.0KiB
2019-06-04 17:25:17.177828: I tensorflow/core/common_runtime/bfc_allocator.cc:674] 1 Chunks of size 4096 totalling 4.0KiB
2019-06-04 17:25:17.178023: I tensorflow/core/common_runtime/bfc_allocator.cc:674] 1 Chunks of size 6144 totalling 6.0KiB
2019-06-04 17:25:17.178106: I tensorflow/core/common_runtime/bfc_allocator.cc:674] 1 Chunks of size 8192 totalling 8.0KiB
2019-06-04 17:25:17.178197: I tensorflow/core/common_runtime/bfc_allocator.cc:674] 1 Chunks of size 32768 totalling 32.0KiB
2019-06-04 17:25:17.178299: I tensorflow/core/common_runtime/bfc_allocator.cc:674] 1 Chunks of size 65536 totalling 64.0KiB
2019-06-04 17:25:17.178419: I tensorflow/core/common_runtime/bfc_allocator.cc:674] 1 Chunks of size 139520 totalling 136.2KiB
2019-06-04 17:25:17.178502: I tensorflow/core/common_runtime/bfc_allocator.cc:674] 1 Chunks of size 1769472 totalling 1.69MiB
2019-06-04 17:25:17.178589: I tensorflow/core/common_runtime/bfc_allocator.cc:674] 1 Chunks of size 2097152 totalling 2.00MiB
2019-06-04 17:25:17.178677: I tensorflow/core/common_runtime/bfc_allocator.cc:674] 1 Chunks of size 2654208 totalling 2.53MiB
2019-06-04 17:25:17.178759: I tensorflow/core/common_runtime/bfc_allocator.cc:674] 1 Chunks of size 4194304 totalling 4.00MiB
2019-06-04 17:25:17.178850: I tensorflow/core/common_runtime/bfc_allocator.cc:678] Sum Total of in-use chunks: 10.48MiB
2019-06-04 17:25:17.178984: I tensorflow/core/common_runtime/bfc_allocator.cc:680] Stats:
Limit: 90476544
InUse: 10994432
MaxInUse: 10994432
NumAllocs: 62
MaxAllocSize: 4194304

2019-06-04 17:25:17.179160: W tensorflow/core/common_runtime/bfc_allocator.cc:279] ____xxxxxxxx*****************_________________________
2019-06-04 17:25:17.179613: W tensorflow/core/framework/op_kernel.cc:1318] OP_REQUIRES failed at random_op.cc:202 : Resource exhausted: OOM when allocating tensor with shape[74208,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
File "image_demo.py", line 18, in <module>
tracker = re3_tracker.Re3Tracker()
File "/home/nvidia/preyas_work/re3-tensorflow-master/tracker/re3_tracker.py", line 41, in __init__
self.sess.run(tf.global_variables_initializer())
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 900, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1135, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1316, in _do_run
run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1335, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[74208,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: re3/fc6/W_fc/Initializer/random_uniform/RandomUniform = RandomUniform[T=DT_INT32, _class=["loc:@re3/fc6/W_fc/Assign"], dtype=DT_FLOAT, seed=0, seed2=0, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Caused by op u're3/fc6/W_fc/Initializer/random_uniform/RandomUniform', defined at:
File "image_demo.py", line 18, in <module>
tracker = re3_tracker.Re3Tracker()
File "/home/nvidia/preyas_work/re3-tensorflow-master/tracker/re3_tracker.py", line 39, in __init__
prevLstmState=self.prevLstmState)
File "/home/nvidia/preyas_work/re3-tensorflow-master/tracker/network.py", line 107, in inference
fc6_out = tf_util.fc_layer(conv_layers, 1024)
File "/home/nvidia/preyas_work/re3-tensorflow-master/re3_utils/tensorflow_util/tf_util.py", line 102, in fc_layer
W_fc = get_variable('W_fc', [input_channels, num_channels], initializer=weights_initializer, summary=summary)
File "/home/nvidia/preyas_work/re3-tensorflow-master/re3_utils/tensorflow_util/tf_util.py", line 86, in get_variable
var = tf.get_variable(name, shape, dtype=dtype, initializer=initializer)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 1328, in get_variable
constraint=constraint)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 1090, in get_variable
constraint=constraint)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 435, in get_variable
constraint=constraint)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 404, in _true_getter
use_resource=use_resource, constraint=constraint)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 796, in _get_single_variable
use_resource=use_resource)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 2234, in variable
use_resource=use_resource)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 2224, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 2207, in default_variable_creator
constraint=constraint)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variables.py", line 259, in __init__
constraint=constraint)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variables.py", line 368, in _init_from_args
initial_value(), name="initial_value", dtype=dtype)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 780, in <lambda>
shape.as_list(), dtype=dtype, partition_info=partition_info)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/layers/python/layers/initializers.py", line 145, in _initializer
dtype, seed=seed)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/random_ops.py", line 242, in random_uniform
rnd = gen_random_ops.random_uniform(shape, dtype, seed=seed1, seed2=seed2)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_random_ops.py", line 674, in random_uniform
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 3414, in create_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1740, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[74208,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: re3/fc6/W_fc/Initializer/random_uniform/RandomUniform = RandomUniform[T=DT_INT32, _class=["loc:@re3/fc6/W_fc/Assign"], dtype=DT_FLOAT, seed=0, seed2=0, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
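The numbers in the log are at least self-consistent: TensorFlow created the GPU device with only about 86 MB (the `Limit: 90476544` in the allocator stats), while a single float32 tensor of shape [74208, 1024] needs roughly 290 MiB, so this allocation could never fit. A quick back-of-the-envelope check (plain arithmetic, no TensorFlow needed):

```python
# Sanity-check the OOM numbers reported by the BFC allocator above.

limit_bytes = 90476544          # "Limit:" from the allocator stats (~86 MiB)
tensor_shape = (74208, 1024)    # shape of the tensor that failed to allocate
bytes_per_float32 = 4

tensor_bytes = tensor_shape[0] * tensor_shape[1] * bytes_per_float32

print("allocator limit:  %.2f MiB" % (limit_bytes / 1024.0 / 1024.0))
print("requested tensor: %.2f MiB" % (tensor_bytes / 1024.0 / 1024.0))
# The request (~290 MiB, matching the "289.88MiB" in the log) is more than
# three times the allocator's entire pool, so the OOM is expected regardless
# of fragmentation -- the real question is why so little memory was free.
```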

Hi,

The document you shared is for JetPack 3.3, but a newer version is now available for the TX2.

Would you mind reflashing your device with JetPack 4.2, installing TensorFlow following this post, and trying again?
https://devtalk.nvidia.com/default/topic/1038957/jetson-tx2/tensorflow-for-jetson-tx2-/
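Before relaunching, it may also help to confirm how much memory is actually free on the board. On the TX2, the CPU and GPU share the same physical RAM, so a plain `free` already reflects what CUDA can grab (a small sketch, assuming a standard L4T/Ubuntu shell):

```shell
# System memory overview in MiB; the "available" column is roughly what
# a new TensorFlow process can claim.
free -m

# Jetson-specific live view of RAM usage and GPU load (requires sudo,
# stop with Ctrl+C):
# sudo tegrastats
```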

Thanks.

Thanks for the reply.
After restarting the Jetson everything works fine; I guess some process was still holding on to the memory.
As far as I know, JetPack 4.2 isn't very stable yet, so I'd prefer to stick with 3.3.

Hi,

It's still recommended to use JetPack 4.2, since lots of libraries have been upgraded in it.

Thanks.