Aborted (core dumped) error when running TensorRT code on Jetson Nano 2GB

Hi,

Hardware - Jetson Nano 2GB
Swap - 5.9GB
System dependencies - Python 3.6
OpenCV 4.x+
TensorFlow 2.x+
TensorRT 7.x
JetPack 4.5

Repository - GitHub - theAIGuysCode/yolov4-deepsort: Object tracking implemented with YOLOv4, DeepSort, and TensorFlow.

Problem -
We are trying to run the code from the above repository, slightly modified to work with TensorRT, but we are getting the memory/core-dump issue described below.

While the code is running, RAM usage climbs to about 1.9GB and swap usage to about 900MB; once it reaches that point, the process aborts with a core dump. Around 4GB of swap is still free on the system, but the running code never uses it.

The standalone code without TensorRT works just fine at about 1.2 FPS.
It would be great if someone could guide us on solving this memory issue.
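For context, the TensorRT integration is done through TF-TRT. The conversion is roughly along these lines (a simplified sketch only; the model paths and the workspace size are placeholders, not our exact values):

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Convert the saved YOLOv4 model with TF-TRT (TF 2.x API).
# Paths and the workspace size below are illustrative placeholders.
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16,   # FP16 to reduce memory use on the Nano
    max_workspace_size_bytes=1 << 26,           # keep the TensorRT workspace small (64MB)
)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='./checkpoints/yolov4-416',
    conversion_params=params,
)
converter.convert()
converter.save('./checkpoints/yolov4-trt-416')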

These are the error messages we are getting -

:81] Allocation of 9437184 exceeds 10% of free system memory.
2021-06-03 12:20:31.707614: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 4718592 exceeds 10% of free system memory.
2021-06-03 12:20:45.741255: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 9437184 exceeds 10% of free system memory.
2021-06-03 12:20:46.041327: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 4718592 exceeds 10% of free system memory

Allocator (GPU_0_bfc) ran out of memory trying to allocate 16.25MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2021-06-03 12:22:21.984029: W tensorflow/core/common_runtime/bfc_allocator.cc:246] Allocator (GPU_0_bfc) ran out of memory trying to allocate 20.46MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2021-06-03 12:22:22.124532: W tensorflow/core/common_runtime/bfc_allocator.cc:246] Allocator (GPU_0_bfc) ran out of memory trying to allocate 16.69MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2021-06-03 12:22:22.715597: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.10
2021-06-03 12:22:26.067826: W tensorflow/core/common_runtime/bfc_allocator.cc:246] Allocator (GPU_0_bfc) ran out of memory trying to allocate 76.29MiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.

p.cc:629] TF-TRT Warning: Engine retrieval for input shapes: [[1,13,13,512]] failed. Running native segment for PartitionedCall/StatefulPartitionedCall/functional_1/TRTEngineOp_0_10
2021-06-03 12:22:34.485469: F tensorflow/core/kernels/resize_bilinear_op_gpu.cu.cc:493] Non-OK-status: GpuLaunchKernel(kernel, config.block_count, config.thread_per_block, 0, d.stream(), config.virtual_thread_count, images.data(), height_scale, width_scale, batch, in_height, in_width, channels, out_height, out_width, output.data()) status: Internal: too many resources requested for launch
Aborted (core dumped)

Thanks


Hi,

Please note that swap cannot increase GPU memory.
However, TensorRT needs GPU memory for inference.

Since the Nano 2GB is memory-limited, it's recommended to try a lightweight model (e.g. YOLOv4-tiny) instead.
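On a shared-memory device it may also help to let TensorFlow allocate GPU memory on demand rather than reserving a large pool up front. A minimal sketch, assuming the TF 2.x API:

import tensorflow as tf

# Ask TensorFlow to grow GPU memory allocation on demand (TF 2.x).
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)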

Thanks.

Hi,

We tried using the tiny weights but are getting the same error.

Thanks

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Hi,

Could you monitor the system memory at the same time?

$ sudo tegrastats

And share the output log with us?
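If it is easier to capture, tegrastats can also write straight to a file (the --interval and --logfile options should be available on JetPack 4.5; please check tegrastats --help on your board):

$ sudo tegrastats --interval 1000 --logfile tegrastats.log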

Thanks.