Hi.
I was trying to run the TensorRT image classification example on a Jetson TX2 with JetPack 3.2. I followed the steps in
https://github.com/NVIDIA-Jetson/tf_to_trt_image_classification/
At the step where the models are converted to frozen graphs by running
$ python scripts/models_to_frozen_graphs.py
on the Jetson TX2, I get an out-of-memory (OOM) error:
…
2018-03-14 15:11:45.343913: W tensorflow/core/common_runtime/bfc_allocator.cc:273] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.27MiB. Current allocation summary follows.
…
What could be the issue? Is there an easy way to debug and/or solve it?
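For context, one debugging step I was planning to try is logging free system memory while the conversion runs, since the TX2 has no dedicated VRAM and TensorFlow's GPU allocator draws from the shared 8 GB of system RAM. A rough sketch (the script path is the one from the repo above; the logging itself is just `free` and `awk`, nothing Jetson-specific):

```shell
# Start the conversion in the background and sample memory once a second
# until it exits. On the TX2, "free" reflects the same pool the GPU
# allocator is competing for.
python scripts/models_to_frozen_graphs.py &
conv_pid=$!
while kill -0 "$conv_pid" 2>/dev/null; do
    # Second line of "free -m" is the Mem: row; $3 = used, $4 = free (MiB).
    free -m | awk -v t="$(date +%T)" 'NR==2 {print t, "usedMiB=" $3, "freeMiB=" $4}'
    sleep 1
done
```

Would that be a reasonable way to see whether the allocator is genuinely running out of headroom, or is there a better tool for this on Jetson?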
Best wishes,
Prabhat