TensorRT 3.0.4 image classification example with JetPack 3.2 on Jetson TX2 giving ResourceExhaustedError

Hi.

I was trying to run the TensorRT image classification example on a Jetson TX2 with JetPack 3.2. I followed the steps described at
https://github.com/NVIDIA-Jetson/tf_to_trt_image_classification/

At the step where the models need to be converted into frozen graphs by executing
$ python scripts/models_to_frozen_graphs.py
on the Jetson TX2, I get an OOM error:

2018-03-14 15:11:45.343913: W tensorflow/core/common_runtime/bfc_allocator.cc:273] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.27MiB. Current allocation summary follows.

What could be the issue? Is there an easy way to debug and/or solve it?

Best wishes,
Prabhat

Hi,

We are checking this issue and will update you later.
Since TensorFlow requires a lot of resources, are you running other applications at the same time?
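
If you want to check how much memory is actually free before launching the script, a minimal sketch like the one below can help (this is only an illustration using /proc/meminfo; tools such as free -m or tegrastats report the same information). Keep in mind that the TX2 shares its physical memory between the CPU and GPU, so a low MemAvailable value also means little memory for TensorFlow on the GPU.

# check_free_memory.py - quick look at free memory before running
# scripts/models_to_frozen_graphs.py on the TX2.
with open('/proc/meminfo') as f:
    for line in f:
        if line.startswith(('MemTotal', 'MemFree', 'MemAvailable')):
            print(line.strip())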

Thanks.

Hi,

We can successfully execute the models_to_frozen_graphs.py script after rebooting the system following installation.

We found that only a small amount of memory is available to TensorFlow right after installation.
Please reboot your system to avoid this issue.
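
If you still see OOM errors after the reboot, one common mitigation is to stop TensorFlow from reserving all GPU memory up front. This is only a sketch using the standard TensorFlow 1.x session options; it assumes you can edit the place where the session is created inside scripts/models_to_frozen_graphs.py, which depends on the script's layout.

import tensorflow as tf

# Allocate GPU memory on demand instead of grabbing it all at startup,
# and optionally cap the process at half of the device memory.
gpu_options = tf.GPUOptions(allow_growth=True,
                            per_process_gpu_memory_fraction=0.5)
config = tf.ConfigProto(gpu_options=gpu_options)
sess = tf.Session(config=config)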

Thanks and please let us know the results.

Thanks a lot for your quick reply.

Indeed, the out-of-memory issue was resolved after rebooting the board.

Best wishes,
Prabhat