tracking anchors
tracking anchors
tracking anchors
tracking anchors
tracking anchors
...
...
...
2020-06-22 10:56:58.136909: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-06-22 10:57:01.940878: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libnvinfer.so.7
2020-06-22 10:57:01.943776: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libnvinfer_plugin.so.7
WARNING:tensorflow:From /home/nano/.virtualenvs/cv4tf2/lib/python3.6/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1635: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
WARNING:tensorflow:From /home/nano/.virtualenvs/cv4tf2/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:4070: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.
WARNING:tensorflow:From /home/nano/.virtualenvs/cv4tf2/lib/python3.6/site-packages/keras_retinanet-0.5.1-py3.6-linux-aarch64.egg/keras_retinanet/backend/tensorflow_backend.py:104: The name tf.where is deprecated. Please use tf.compat.v1.where instead.
2020-06-22 10:57:22.641566: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-06-22 10:57:22.650423: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-06-22 10:57:22.650610: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:00:00.0 name: NVIDIA Tegra X1 computeCapability: 5.3
coreClock: 0.9216GHz coreCount: 1 deviceMemorySize: 3.86GiB deviceMemoryBandwidth: 194.55MiB/s
2020-06-22 10:57:22.650764: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-06-22 10:57:22.651019: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-06-22 10:57:22.670218: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-06-22 10:57:22.709124: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-06-22 10:57:22.727666: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-06-22 10:57:22.746513: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-06-22 10:57:22.746987: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-06-22 10:57:22.747299: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-06-22 10:57:22.747602: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-06-22 10:57:22.747705: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-06-22 10:57:22.770130: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-06-22 10:57:22.770751: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xca08fa0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-06-22 10:57:22.770811: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-06-22 10:57:22.846759: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-06-22 10:57:22.847134: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0xce97480 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-06-22 10:57:22.847200: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
2020-06-22 10:57:22.847635: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-06-22 10:57:22.847760: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:00:00.0 name: NVIDIA Tegra X1 computeCapability: 5.3
coreClock: 0.9216GHz coreCount: 1 deviceMemorySize: 3.86GiB deviceMemoryBandwidth: 194.55MiB/s
2020-06-22 10:57:22.847963: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-06-22 10:57:22.848064: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-06-22 10:57:22.848156: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-06-22 10:57:22.848237: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-06-22 10:57:22.848315: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-06-22 10:57:22.848388: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-06-22 10:57:22.848459: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.8
2020-06-22 10:57:22.848666: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-06-22 10:57:22.848896: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-06-22 10:57:22.848971: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-06-22 10:57:22.849098: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-06-22 10:57:27.773919: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-06-22 10:57:27.774115: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102]      0
2020-06-22 10:57:27.774155: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0:   N
2020-06-22 10:57:27.774729: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-06-22 10:57:27.775181: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-06-22 10:57:27.775424: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 144 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
...
...
...
2020-06-22 10:58:52.688326: I tensorflow/core/common_runtime/bfc_allocator.cc:964] total_region_allocated_bytes_: 151228416 memory_limit_: 151228416 available bytes: 0 curr_region_allocation_bytes_: 302456832
2020-06-22 10:58:52.688390: I tensorflow/core/common_runtime/bfc_allocator.cc:970] Stats:
Limit:                   151228416
InUse:                   151228416
MaxInUse:                151228416
NumAllocs:                     104
MaxAllocSize:             28531712
2020-06-22 10:58:52.688570: W tensorflow/core/common_runtime/bfc_allocator.cc:429] **********************************************************************************************xxxxxx
2020-06-22 10:58:52.688643: W tensorflow/core/framework/op_kernel.cc:1655] OP_REQUIRES failed at random_op.cc:76 : Resource exhausted: OOM when allocating tensor with shape[1,1,512,128] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
2020-06-22 10:59:02.688926: W tensorflow/core/common_runtime/bfc_allocator.cc:424] Allocator (GPU_0_bfc) ran out of memory trying to allocate 256.0KiB (rounded to 262144). Current allocation summary follows.
2020-06-22 10:59:02.689019: I tensorflow/core/common_runtime/bfc_allocator.cc:894] BFCAllocator dump for GPU_0_bfc
2020-06-22 10:59:02.689065: I tensorflow/core/common_runtime/bfc_allocator.cc:901] Bin (256):     Total Chunks: 32, Chunks in use: 32. 8.0KiB allocated for chunks. 8.0KiB in use in bin. 1.5KiB client-requested in use in bin.
2020-06-22 10:59:02.689133: I tensorflow/core/common_runtime/bfc_allocator.cc:901] Bin (512):     Total Chunks: 2, Chunks in use: 2. 1.0KiB allocated for chunks. 1.0KiB in use in bin. 1.0KiB client-requested in use in bin.
2020-06-22 10:59:02.689191: I tensorflow/core/common_runtime/bfc_allocator.cc:901] Bin (1024):    Total Chunks: 3, Chunks in use: 3. 3.2KiB allocated for chunks. 3.2KiB in use in bin. 3.0KiB client-requested in use in bin.
2020-06-22 10:59:02.689231: I tensorflow/core/common_runtime/bfc_allocator.cc:901] Bin (2048):    Total Chunks: 2, Chunks in use: 2. 4.0KiB allocated for chunks. 4.0KiB in use in bin. 4.0KiB client-requested in use in bin.
2020-06-22 10:59:02.689273: I tensorflow/core/common_runtime/bfc_allocator.cc:901] Bin (4096):    Total Chunks: 2, Chunks in use: 2. 8.0KiB allocated for chunks. 8.0KiB in use in bin. 8.0KiB client-requested in use in bin.
2020-06-22 10:59:02.689312: I tensorflow/core/common_runtime/bfc_allocator.cc:901] Bin (8192):    Total Chunks: 2, Chunks in use: 2. 16.0KiB allocated for chunks. 16.0KiB in use in bin. 16.0KiB client-requested in use in bin.
2020-06-22 10:59:02.689353: I tensorflow/core/common_runtime/bfc_allocator.cc:901] Bin (16384):   Total Chunks: 1, Chunks in use: 1. 16.0KiB allocated for chunks. 16.0KiB in use in bin. 16.0KiB client-requested in use in bin.
2020-06-22 10:59:02.689394: I tensorflow/core/common_runtime/bfc_allocator.cc:901] Bin (32768):   Total Chunks: 1, Chunks in use: 1. 36.8KiB allocated for chunks. 36.8KiB in use in bin. 36.8KiB client-requested in use in bin.
2020-06-22 10:59:02.689433: I tensorflow/core/common_runtime/bfc_allocator.cc:901] Bin (65536):   Total Chunks: 2, Chunks in use: 2. 128.0KiB allocated for chunks. 128.0KiB in use in bin. 128.0KiB client-requested in use in bin.
2020-06-22 10:59:02.689474: I tensorflow/core/common_runtime/bfc_allocator.cc:901] Bin (131072):  Total Chunks: 4, Chunks in use: 4. 560.0KiB allocated for chunks. 560.0KiB in use in bin. 560.0KiB client-requested in use in bin.
2020-06-22 10:59:02.689511: I tensorflow/core/common_runtime/bfc_allocator.cc:901] Bin (262144):  Total Chunks: 4, Chunks in use: 4. 1.00MiB allocated for chunks. 1.00MiB in use in bin. 1.00MiB client-requested in use in bin.
2020-06-22 10:59:02.689547: I tensorflow/core/common_runtime/bfc_allocator.cc:901] Bin (524288):  Total Chunks: 7, Chunks in use: 7. 3.75MiB allocated for chunks. 3.75MiB in use in bin. 3.75MiB client-requested in use in bin.
2020-06-22 10:59:02.689590: I tensorflow/core/common_runtime/bfc_allocator.cc:901] Bin (1048576): Total Chunks: 12, Chunks in use: 12. 12.00MiB allocated for chunks. 12.00MiB in use in bin. 12.00MiB client-requested in use in bin.
...
...
...
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/nano/Desktop/ObjectDetection_Retinanet.py", line 65, in <module>
    model = models.load_model(args["model"], backbone_name='resnet50')
  File "/home/nano/.virtualenvs/cv4tf2/lib/python3.6/site-packages/keras_retinanet-0.5.1-py3.6-linux-aarch64.egg/keras_retinanet/models/__init__.py", line 87, in load_model
    return keras.models.load_model(filepath, custom_objects=backbone(backbone_name).custom_objects)
  File "/home/nano/.virtualenvs/cv4tf2/lib/python3.6/site-packages/keras/engine/saving.py", line 492, in load_wrapper
    return load_function(*args, **kwargs)
  File "/home/nano/.virtualenvs/cv4tf2/lib/python3.6/site-packages/keras/engine/saving.py", line 584, in load_model
    model = _deserialize_model(h5dict, custom_objects, compile)
  File "/home/nano/.virtualenvs/cv4tf2/lib/python3.6/site-packages/keras/engine/saving.py", line 336, in _deserialize_model
    K.batch_set_value(weight_value_tuples)
  File "/home/nano/.virtualenvs/cv4tf2/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2960, in batch_set_value
    tf_keras_backend.batch_set_value(tuples)
  File "/home/nano/.virtualenvs/cv4tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py", line 3348, in batch_set_value
    get_session().run(assign_ops, feed_dict=feed_dict)
  File "/home/nano/.virtualenvs/cv4tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py", line 496, in get_session
    _initialize_variables(session)
  File "/home/nano/.virtualenvs/cv4tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py", line 918, in _initialize_variables
    session.run(variables_module.variables_initializer(uninitialized_vars))
  File "/home/nano/.virtualenvs/cv4tf2/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 960, in run
    run_metadata_ptr)
  File "/home/nano/.virtualenvs/cv4tf2/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1183, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/nano/.virtualenvs/cv4tf2/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1361, in _do_run
    run_metadata)
  File "/home/nano/.virtualenvs/cv4tf2/lib/python3.6/site-packages/tensorflow_core/python/client/session.py", line 1386, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[3,3,256,36] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[node pyramid_regression/random_normal/RandomStandardNormal (defined at /.virtualenvs/cv4tf2/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:4329) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
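The "Hint" line at the end is actionable. On the TF 1.x-style graph API this stack is running (tensorflow_core plus standalone keras), report_tensor_allocations_upon_oom is a field of the RunOptions protobuf that you pass to Session.run; if an OOM fires during that run, TensorFlow prints which tensors were live at the time. A minimal sketch of the mechanism on a toy graph (the random_normal/matmul ops below are illustrative, not part of the script in the traceback):

    import tensorflow as tf

    tf1 = tf.compat.v1  # the log above comes from a TF 1.x-style graph session

    # Ask TensorFlow to list live tensor allocations if an OOM occurs in run().
    run_options = tf1.RunOptions(report_tensor_allocations_upon_oom=True)

    # Toy graph standing in for the real model's ops.
    x = tf1.random_normal([1024, 1024])
    y = tf1.matmul(x, x)

    with tf1.Session() as sess:
        result = sess.run(y, options=run_options)

Note that in this traceback the OOM fires inside keras.models.load_model, where Keras drives its own internal session, so wiring these options in would mean going through keras.backend.get_session() yourself. The root cause is visible in the log either way: TensorFlow could only reserve about 144 MB of the Nano's shared RAM (memory_limit_: 151228416) before loading a ResNet-50 RetinaNet, so the BFC allocator is exhausted almost immediately.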